Beaufort Intelligence
Position Paper

AI & Astrology: An Ethical Framework

Principles for the responsible application of artificial intelligence to astrological analysis, published before they are urgently needed

Jack Beaufort
Beaufort Intelligence, United Kingdom
contact@beaufort-intelligence.com

Abstract

This paper establishes an ethical framework for the application of artificial intelligence to astrological analysis. It argues that responsible development requires proactive constraint-building rather than reactive regulation, drawing parallels with the histories of intelligence testing, personality assessment, credit scoring, and genetic data. Birth data is immutable: unlike a password or address, a person cannot change their date, time, and place of birth. Any system that derives meaningful psychological or behavioural analysis from that data must operate within clear ethical constraints regarding consent, determinism, third-party use, and the right to not know. The paper proposes six principles for responsible AI-astrology and situates them within existing regulatory frameworks including the EU AI Act and APA guidance on technology in professional practice.

Keywords: AI ethics, astrology, birth data, consent, data sovereignty, determinism, EU AI Act, GDPR, responsible scaling, computational astrology

1. Introduction

This framework exists because someone should write it before someone has to.

The application of artificial intelligence to astrological analysis is not a hypothetical. It is happening now. Consumer platforms generate daily horoscopes using large language models. Apps like Co-Star serve millions of users with algorithmically produced natal chart interpretations. Beaufort Intelligence uses AI within strict architectural constraints to produce structured psychological analysis from natal chart data. The question is no longer whether AI will be applied to astrology. The question is whether anyone will build the ethical constraints before the technology outpaces them.

Anthropic, the AI laboratory, published its Responsible Scaling Policy before its most capable models were deployed, arguing that safety frameworks must be established proactively because reactive regulation always arrives too late.[1] The same logic applies here. If computational astrology matures to a point where its analytical outputs carry predictive weight, the implications for privacy, consent, discrimination, and autonomy are significant. Building the ethical constraints after that point is reached means building them after damage has already been done.

The history of analytical tools being repurposed as instruments of discrimination is not speculative. It is documented, repeated, and instructive.

2. When Analytical Tools Become Weapons

IQ tests were designed for educational assessment. Lewis Terman adapted the Binet-Simon scale at Stanford and promoted its use for identifying children who needed additional support. Within a decade, the same instrument was being used to justify forced sterilisation programmes across the United States, classify racial groups as intellectually inferior, and restrict immigration from Southern and Eastern Europe. Terman himself was a member of the Human Betterment Foundation and the American Eugenics Society.[2] The tool did not change. Its application did.

Personality assessments followed the same trajectory. The Myers-Briggs Type Indicator and the Big Five were developed for self-understanding and clinical insight. They are now used by employers to screen job candidates, a practice that the American Civil Liberties Union has challenged in formal complaints against hiring technology vendors, arguing that these assessments produce discriminatory outcomes that disproportionately exclude qualified candidates from protected groups.[3] In 1971, the U.S. Supreme Court ruled in Griggs v. Duke Power Co. that employment tests with discriminatory impact violate civil rights law regardless of whether the employer intended to discriminate.[4] Five decades later, the same pattern is being repeated with AI-driven assessments.

Credit scoring algorithms, trained on historically biased data, systematically disadvantage minority borrowers. Research from the Stanford Institute for Human-Centered Artificial Intelligence has shown that credit scores are 5 to 10 percent less accurate for minorities and low-income borrowers, not because of algorithmic bias per se, but because the data these algorithms are trained on reflects decades of structural inequality.[5]

Genetic data offers the closest analogy to birth data. Before the Genetic Information Nondiscrimination Act was passed in the United States in 2008, employers and insurers could use genetic test results to deny coverage or employment.[6] It took thirteen years of legislative effort, from the first bill introduced in 1995 to the law's enactment, to establish basic protections. And GINA still does not cover life insurance, disability insurance, or long-term care. Genetic data, like birth data, is immutable. The person cannot change it. The potential for discriminatory application existed from the moment the data became analytically useful.


The pattern across all four cases is identical. A tool is built for understanding. It is repurposed for classification. Classification becomes discrimination. Regulation arrives years or decades after the harm. The people who built the tool did not intend the harm, but the absence of proactive constraints made it inevitable.

3. The Immutability Problem

Birth data occupies a unique position in the landscape of personal information. A person's date, time, and place of birth cannot be changed. Unlike a password, it cannot be reset. Unlike an address, it cannot be relocated. Unlike a name, it cannot be amended by deed poll. It is fixed at the moment of birth and remains constant for the duration of a person's life.

Under GDPR, date of birth is classified as personal data, and location data is explicitly covered.[7] The full triad of birth date, birth time, and birth location does not currently fall within GDPR's Article 9 special categories (which include, among others, genetic data, biometric data, health data, and religious or philosophical beliefs). But the combination of these three data points enables the generation of a complete natal chart, from which psychological and behavioural analysis can be derived. If that analysis becomes demonstrably accurate, the data that generates it becomes sensitive in practice, whether or not current regulation classifies it as such.

Consider, briefly, what becomes possible if computational astrology produces outputs with predictive validity. Insurance companies could request birth data alongside medical history, screening for natal configurations associated with risk-taking behaviour or health vulnerabilities. Employers could filter candidates by natal chart compatibility with company culture or team composition. Dating applications could sort potential matches by synastry scores. Governments have already used astrology for strategic planning: Nancy Reagan consulted astrologer Joan Quigley, whose advice shaped President Reagan's schedule for much of his presidency, a practice documented in Quigley's own memoir and confirmed by Reagan's Chief of Staff, Donald Regan.[8]

These scenarios are not science fiction. They are the logical extension of the same trajectory that transformed IQ tests, personality assessments, credit scores, and genetic data from tools of understanding into instruments of gatekeeping. The only variable is whether the constraints are built before or after.

4. The Epistemic Threshold

The scenarios described above are institutional. They concern how organisations might misuse astrological data. But there is a deeper question that most ethical frameworks in this space have not confronted, because confronting it requires taking the subject matter seriously enough to follow the logic to its conclusion.

If computational astrology, augmented by artificial intelligence with sufficient pattern recognition and synthesis capability, were to produce outputs that are reliably and demonstrably accurate, the consequences extend far beyond data misuse. The entire materialist-rationalist framework that underpins Western science, medicine, law, and economics would have a hole in it. Not a small one. If planetary transits to natal positions can reliably predict specific psychological states or life events, that implies some form of non-local, non-causal connection between celestial mechanics and individual human experience. That is not an adjustment to the existing paradigm. It is a replacement of it.

The response would likely arrive in stages. Initial denial and debunking attempts from the scientific establishment, following the pattern documented in the accompanying literature review. Then, once the statistical evidence became undeniable, a fracture: one camp attempting to absorb the findings into physics through field theory or information-theoretic explanations, another interpreting them as vindication of pre-Enlightenment cosmology. Religious institutions would split on whether the findings confirm or threaten their own frameworks. The epistemological ground would shift beneath every discipline that depends on a mechanistic model of causation.

The practical consequences arrive faster than the philosophical ones, and they are darker. The commodification would be immediate and ruthless. Insurance companies would want birth charts. Employers would screen by natal placements. Courts might consider transit data as mitigating circumstances. Dating platforms would filter by synastry. Military and intelligence agencies would apply mundane astrology to strategic planning. Every institution that currently uses data to predict and sort human behaviour would have access to a new dataset that the individual cannot change, cannot opt out of, and may not even know is being used. The psychological depth that makes reflective astrological analysis valuable would be stripped out entirely, reducing a framework for self-understanding to a deterministic sorting tool.

There is also the question of what reliable predictive accuracy would do to the people receiving the analysis. If difficult transits can be identified in advance with genuine precision, a proportion of recipients would experience anticipatory fatalism. The capacity to see difficulty coming does not, for most people, produce equanimity. It produces anxiety. For individuals already experiencing psychological vulnerability, the delivery of accurate but distressing predictive information without adequate contextualisation and safeguarding could cause real harm. Any system operating in this space has an obligation to build protections against this, regardless of its current level of accuracy.

And the power asymmetry would be significant. Whoever controls the most capable interpretation models would control a form of predictive intelligence that makes every other forecasting methodology look primitive. Financial markets, geopolitical strategy, resource allocation. The concentration of that capability in any single entity, whether a corporation, a government, or an individual, would represent an asymmetry of information without historical precedent.

None of this is certain. The question of whether astrological analysis can produce outputs with genuine predictive validity remains open, and Beaufort Intelligence's research programme is designed to investigate it with methodological rigour. But the ethical framework cannot wait for the answer. If the constraints are built only after demonstrable accuracy is achieved, they will be built after the insurance companies, the employers, and the intelligence agencies have already begun using the data. The history of every analytical technology described in this paper confirms that reactive regulation arrives too late. The constraints must exist before they are needed.

5. Principles for Responsible AI-Astrology

The following principles govern Beaufort Intelligence's approach to applying artificial intelligence to astrological analysis. They are published openly and are intended to be adopted, adapted, or challenged by anyone building in this space.

  I. Consent and Data Sovereignty
    Birth data belongs to the person it describes. It must be provided voluntarily, stored securely, and never shared with third parties without explicit consent. The person retains the right to request deletion of their data and all analysis derived from it at any time. No birth data should be collected, inferred, or processed without the informed consent of the individual whose data it is.
  II. No Deterministic Outputs
    Astrological analysis must use probabilistic language and reflective framing. A natal chart describes conditions, not certainties. Analysis should support self-understanding, not prescribe outcomes. Outputs must never state that a person will behave in a particular way, that a relationship will fail, that a career path is predetermined, or that any life outcome is fixed by natal configuration. The framework is diagnostic, not prophetic.
  III. No Third-Party Profiling
    A person's natal chart analysis must only be accessible to that person, or to parties they have explicitly authorised. No system should generate, store, or transmit analysis of an individual's birth data for the purpose of assessment, screening, ranking, or classification by any third party, including employers, insurers, educational institutions, law enforcement, or government agencies.
  IV. Transparency of Method
    The analytical process must be auditable. A reader with astrological knowledge should be able to trace any claim in a report back to the specific chart configurations that generated it and evaluate whether the interpretation is warranted. Black-box analysis, where outputs cannot be traced to inputs, is incompatible with responsible practice. If the system cannot explain why it produced a particular conclusion, it should not produce that conclusion.
  V. Psychological Safeguarding
    Any system that delivers psychological analysis to individuals must integrate safeguarding measures designed to minimise the risk of harm to vulnerable recipients. This includes contextualisation of sensitive material, clear signposting to professional support services where appropriate, explicit warnings before the delivery of content that addresses trauma, shadow material, or psychologically challenging themes, and informed consent mechanisms that ensure the recipient understands the nature of the analysis before receiving it. The capacity to generate psychologically penetrating analysis creates a duty of care that scales with the depth and accuracy of the output.
  VI. Right to Not Know
    A person has the right to decline specific categories of analysis. If a report system is capable of analysing shadow material, trauma patterns, relationship dynamics, or health vulnerabilities, the individual must be able to opt out of any category they do not wish to receive. The capacity to generate analysis does not create an obligation to deliver it. Consent to one form of analysis does not constitute consent to all forms.

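Principle IV's traceability requirement can be enforced structurally rather than by policy alone: every claim object carries the chart configurations that produced it, and untraceable claims are rejected at construction. The sketch below is illustrative only, not Beaufort Intelligence's implementation; the `Claim` and `Report` types and the example placements are assumptions introduced here.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Claim:
    """One interpretive statement, traceable to the chart factors behind it."""
    text: str                       # the claim as delivered to the reader
    sources: tuple                  # chart configurations that generated it
    framing: str = "probabilistic"  # Principle II: never deterministic


@dataclass
class Report:
    claims: list = field(default_factory=list)

    def add(self, text, sources):
        # Principle IV: a claim that cannot be traced to inputs is not emitted.
        if not sources:
            raise ValueError(f"untraceable claim rejected: {text!r}")
        self.claims.append(Claim(text, tuple(sources)))

    def audit(self):
        """Map every delivered claim back to its generating configurations."""
        return {c.text: c.sources for c in self.claims}


report = Report()
report.add(
    "A tendency toward intense self-scrutiny may be present.",
    ("Sun conjunct Saturn", "12th-house emphasis"),
)
```

The point of this design is that auditability is a property of the data model itself: there is no code path that delivers a claim without its sources.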
6. Regulatory Context

These principles do not exist in a regulatory vacuum. The European Union's Artificial Intelligence Act, which entered into force in 2024, classifies AI systems in enumerated areas affecting fundamental rights as high-risk under Article 6 and Annex III.[9] A system that generates psychological analysis from personal data and delivers it directly to the individual it describes sits close to this classification, even though astrological analysis is not among the enumerated use cases. High-risk systems under the EU AI Act are subject to enhanced requirements for transparency, data governance, human oversight, and risk management, requirements that align with the principles outlined above.

The American Psychological Association published guidance in 2024 on the ethical use of AI in professional practice, emphasising that AI tools used in psychological contexts must maintain the same ethical standards as human practitioners, including informed consent, competence boundaries, and avoidance of harm.[10] Whether astrological analysis constitutes "psychological practice" is a definitional question that current regulation has not addressed. The prudent position is to operate as though it does.

The gap in existing regulation is not in the principles but in their application. GDPR covers birth data as personal data but does not classify it as sensitive. The EU AI Act addresses high-risk psychological systems but has not considered astrological analysis as a use case. APA guidance covers AI in professional practice but does not extend to non-clinical analytical tools. In every case, the regulatory framework has been built around existing technologies and existing categories of harm. Computational astrology sits in the space between those categories, which is precisely why a proactive framework is necessary.

7. Conclusion

The difference between building powerful tools and building responsible ones is not capability. It is constraint. Every analytical technology that has caused significant harm did so not because its creators were malicious but because the constraints were built after the applications, not before them. IQ tests existed for decades before the eugenics programmes they enabled were dismantled. Personality assessments have been used in discriminatory hiring for years while legal challenges are still working through the courts. Credit scoring algorithms continue to disadvantage the populations they were never designed to serve while regulators debate definitions.

Computational astrology is in its earliest stages. The question of whether natal chart analysis produces outputs with genuine predictive validity remains open and is being investigated through Beaufort Intelligence's formal research programme. But the ethical constraints should not depend on the answer to that question. If the analysis is accurate, the constraints protect people from its misuse. If it is not accurate, the constraints protect people from being harmed by false confidence in its outputs. In either case, the framework is necessary.

This document is published openly. It will be updated as the technology develops, as regulatory frameworks evolve, and as the research programme produces data that informs the ethical implications of this work. It is not a final statement. It is a starting position, established on record, before it is urgently needed.

References
  1. Anthropic. (2023). "Responsible Scaling Policy," v1.0. Updated 2024.
    anthropic.com/responsible-scaling-policy
  2. Stanford Daily. (2019). "Eugenics on the Farm: Lewis Terman." See also: Stanford Magazine, "The Vexing Legacy of Lewis Terman."
    stanforddaily.com
  3. American Civil Liberties Union. (2024). FTC Complaint regarding Aon Consulting, Inc. See also: ACLU, "The Long History of Discrimination in Job Hiring Assessments" (2023).
    aclu.org
  4. Griggs v. Duke Power Co., 401 U.S. 424 (1971). U.S. Supreme Court.
    supreme.justia.com/cases/federal/us/401/424
  5. Blattner, L. & Nelson, S. (2021). "How Flawed Data Aggravates Inequality in Credit." Stanford Institute for Human-Centered Artificial Intelligence.
    hai.stanford.edu
  6. Genetic Information Nondiscrimination Act of 2008 (GINA). Public Law 110-233. U.S. Equal Employment Opportunity Commission.
    eeoc.gov
  7. Regulation (EU) 2016/679, General Data Protection Regulation (GDPR). Article 4(1): Definition of personal data. Article 9: Special categories of personal data.
    gdpr-info.eu/issues/personal-data
  8. Quigley, J. (1990). What Does Joan Say?: My Seven Years as White House Astrologer to Nancy and Ronald Reagan. Carol Publishing Group. See also: Regan, D. (1988). For the Record. Harcourt Brace Jovanovich. Hoover Institution archives: Papers of Joan Quigley.
    hoover.org
  9. European Parliament. (2024). Regulation (EU) 2024/1689, Artificial Intelligence Act. Article 6 and Annex III: High-Risk AI Systems Classification.
    artificialintelligenceact.eu/article/6
  10. American Psychological Association. (2024). "APA's AI Tool Guide for Practitioners." See also: APA Council of Representatives, "Policy Statement on Artificial Intelligence and the Field of Psychology" (2024).
    apa.org