
Why We Built a Better Personality Test

The biggest personality tests in the world have a dirty secret: the science behind them is shaky, the prices are steep, and they haven't meaningfully evolved in decades. We think you deserve better.

What You Actually Get

Most personality platforms paywall the good stuff. We don't. Here's what every user gets — completely free.

🧭 Two Complete Quizzes

Philosophy (Mind) and Emotional Intelligence (Heart). 8 questions each, 2 minutes. No signup required.

📊 Full Team Profile

Strengths, weaknesses, career paths, stress patterns, compatibility with all other types, famous members — no paywall.

🎁 Free Digital Downloads

Custom tarot card or poster — yours instantly on quiz completion. No email required, no strings attached.

🏆 Live Community

Real-time leaderboard, team voting, XP levels, 15 achievement badges, daily streaks, global nationality tracking.

What you get for free                  MyPeeps   16Personalities   Truity
Full personality results               Yes       Partial           Partial
Detailed strengths & weaknesses        Yes       Paid ($50+)       Paid ($19–$80)
Career guidance                        Yes       Paid              Paid
Compatibility profiles                 Yes       Paid              Paid
Stress & growth insights               Yes       Paid              Paid
Free digital download on completion    Yes
No advertisements                      Yes       Ads
No data selling                        Yes       Undisclosed       Ad-supported
Community & gamification               Yes       Forums
Premium reports                        $1+       $50+              $19–$80

We're donation-based. If you find value, grab a premium report, wallpaper pack, or bundle for as little as $1. Everything else is free, forever.

The Problem with Legacy Personality Tests

MBTI: 50% of People Get a Different Result on Retest

The U.S. National Academy of Sciences reviewed Myers-Briggs research and concluded there was "not sufficient, well-designed research to justify the use of MBTI in career counseling programs" (Druckman & Bjork, 1991). Roughly half of test-takers receive a different four-letter type (at least one letter changes) when they retake the test weeks later.

Why? Personality traits are normally distributed — most people fall near the middle of each dimension. MBTI forces an arbitrary binary split, so anyone near the midpoint flips randomly on retest. This was demonstrated by McCrae & Costa (1989) in the Journal of Personality, who showed the distributions are unimodal, not bimodal — meaning the "types" don't actually exist as distinct categories.
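The instability is easy to demonstrate. Below is a minimal Monte Carlo sketch (our own illustration, not code from any cited paper): trait scores are drawn from a normal distribution, the retest score correlates with the original at an assumed test-retest reliability of r = 0.80, and each axis's letter is assigned by a midpoint split. Even at that respectable reliability, about one flip in five is expected per axis, and roughly 60% of simulated four-letter types change on retest.

```python
import math
import random

def simulate_retest(r=0.80, n_axes=4, trials=100_000, seed=7):
    """Monte Carlo sketch: each trait score is N(0, 1); the retest score
    correlates r with the original.  Dichotomizing at the midpoint flips
    that axis's letter whenever the two scores straddle zero."""
    rng = random.Random(seed)
    axis_flips = 0
    type_changes = 0
    for _ in range(trials):
        changed = False
        for _ in range(n_axes):
            z1 = rng.gauss(0.0, 1.0)
            # retest draw with correlation r and the same N(0, 1) marginal
            z2 = r * z1 + math.sqrt(1.0 - r * r) * rng.gauss(0.0, 1.0)
            if (z1 >= 0.0) != (z2 >= 0.0):
                axis_flips += 1
                changed = True
        if changed:
            type_changes += 1
    return axis_flips / (trials * n_axes), type_changes / trials

per_axis, whole_type = simulate_retest()
# closed form for one axis: P(flip) = arccos(r) / pi ≈ 0.205 at r = 0.80
```

The per-axis flip rate matches the closed-form value arccos(r)/π. Raising r toward 1 shrinks the flip zone, but anyone sitting near the midpoint of a dimension remains, in effect, a coin flip.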

EQ Tests: Even Shakier

The most widely used clinical EQ test (MSCEIT) has a fundamental problem: up to 76% of its variance is explained by personality traits, general intelligence, and demographics — raising the question of what it's actually measuring (Fiori & Antonakis, 2011).

The Bar-On EQ-i produced an even more startling result: prison offenders scored higher than the general population (Ermer et al., 2012). A systematic review of 40 EQ instruments found that many are "statistically equivalent to a magazine quiz" (Sanchez-Garcia et al., 2022).

A Multi-Billion Dollar Industry with Misaligned Incentives

The personality assessment industry is valued at over $5 billion. The commercial incentive is to build long, proprietary tests that require paid certifications and expensive per-seat licenses, locking users into closed ecosystems. Meanwhile, peer-reviewed research consistently shows that free, public-domain instruments match or exceed commercial ones in validity.

The International Personality Item Pool (IPIP) — a free, open-source collection of 3,329 items — achieves a mean reliability of .80, surpassing the commercial NEO PI-R's .75 (Goldberg et al., 2006). Price is not a proxy for scientific quality.

What the Science Actually Supports

Shorter Tests Produce Better Data

The landmark study by Gosling, Rentfrow & Swann (2003) demonstrated that a 10-item personality inventory matches the validity of a 44-item instrument, with convergent correlations of .65 to .87. It has been cited over 9,500 times.

Burisch (1984) showed in the American Psychologist that "most existing personality questionnaires can be shortened considerably without any consequent loss in validity" — and that single-item self-ratings sometimes outperform full questionnaires.

Why? Survey fatigue. Research shows that longer assessments cause straight-line responding and question-skipping, with an additional hour of survey time increasing skip rates by 10–64% (PMC, 2025). A shorter instrument answered honestly produces more reliable data than a longer one answered carelessly.

2 Questions Per Axis Is the Validated Minimum

Our quiz uses 8 questions across 4 binary dimensions — exactly 2 per axis. This is the scientifically validated minimum threshold identified by Crede et al. (2012) in the Journal of Personality and Social Psychology, who found "substantial improvements in criterion validity when going from one to two items per scale."
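One way to see the jump from one to two items is the Spearman-Brown prophecy formula, a textbook psychometric result (the formula is standard; the .50 single-item reliability below is purely an illustrative assumption, not a figure from Crede et al.):

```python
def spearman_brown(r_single: float, k: float) -> float:
    """Spearman-Brown prophecy: predicted reliability of a scale made
    k times longer, given the reliability r_single of the original."""
    return k * r_single / (1 + (k - 1) * r_single)

# Assume a single item with reliability .50 (illustrative value):
print(spearman_brown(0.50, 1))  # 0.5
print(spearman_brown(0.50, 2))  # ≈ 0.667, the largest relative gain
print(spearman_brown(0.50, 4))  # 0.8
```

The biggest marginal gain comes from the first added item, which is consistent with Crede et al.'s finding that validity improves most in the step from one to two items per scale.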

Multiple validated instruments use exactly this approach: the TIPI (2 items/trait, Gosling et al., 2003) and the BFI-10 (2 items/trait, Rammstedt & John, 2007), both widely used in published research.

Binary Choices Are More Honest

Two independent meta-analyses — covering over 100,000 participants — confirm that forced-choice formats are dramatically more resistant to faking than traditional Likert scales.

Cao & Drasgow (2019) found score inflation of just d=0.06 with forced-choice — essentially zero faking. Martinez & Salgado (2021) confirmed this across 82 samples: forced-choice personality inventories show superior faking resistance.

When you ask someone "Are you more drawn to discipline or passion?" — there's no obvious "right" answer to game. Both options are desirable. That's by design.

What We Believe

Self-Knowledge Should Be Free

We give everything away: your full quiz results, your team profile, your detailed personality report. If you find value in what we built and want to support the project, you can grab a premium resource for as little as $1. That's it. No pressure, no dark patterns, no "unlock your FULL results for $49.99."

Community Over Commerce

Most personality platforms treat you as a lead to convert. We treat you as a team member. Our leaderboard, voting system, badges, and streaks exist because understanding yourself is more fun — and more accurate — when it's social. You're not a data point. You're a person who thinks a certain way, and there are others like you.

You are not weird — at least not alone.

Open to Everyone

Schools, universities, NGOs, therapists, coaches, community organizations — you are welcome to use our quizzes, team frameworks, and resources in any way you see fit, including commercially. No license fees. No permission needed. We believe personality tools should be accessible to anyone working to help people understand themselves better.

References

Every claim on this page is backed by peer-reviewed research. Here are the papers we referenced:

  1. Druckman, D. & Bjork, R. A. (Eds.) (1991). In the Mind's Eye: Enhancing Human Performance. National Academy Press.
  2. McCrae, R. R. & Costa, P. T. (1989). Reinterpreting the Myers-Briggs Type Indicator from the Perspective of the Five-Factor Model of Personality. Journal of Personality, 57(1).
  3. Gosling, S. D., Rentfrow, P. J., & Swann, W. B. (2003). A Very Brief Measure of the Big Five Personality Domains. Journal of Research in Personality, 37, 504–528.
  4. Burisch, M. (1984). Approaches to Personality Inventory Construction. American Psychologist, 39, 214–227.
  5. Crede, M., Harms, P., Niehorster, S., & Gaye-Valentine, A. (2012). An Evaluation of the Consequences of Using Short Measures of the Big Five Personality Traits. Journal of Personality and Social Psychology, 102, 874–888.
  6. Rammstedt, B. & John, O. P. (2007). Measuring Personality in One Minute or Less. Journal of Research in Personality, 41, 203–212.
  7. Cao, M. & Drasgow, F. (2019). Does Forcing Reduce Faking? A Meta-Analytic Review. Journal of Applied Psychology, 104(11), 1347–1363.
  8. Martinez, A. & Salgado, J. F. (2021). A Meta-Analysis of the Faking Resistance of Forced-Choice Personality Inventories. Frontiers in Psychology, 12, 732241.
  9. Fiori, M. & Antonakis, J. (2011). The ability model of emotional intelligence: Searching for valid measures. Personality and Individual Differences, 50(3), 329–334.
  10. Ermer, E., Kahn, R. E., Salovey, P., & Kiehl, K. A. (2012). Emotional intelligence in incarcerated men. Personality and Individual Differences, 52(4).
  11. Sanchez-Garcia, M. et al. (2022). Emotional Intelligence Measures: A Systematic Review. Healthcare, 10(10).
  12. Goldberg, L. R. et al. (2006). The International Personality Item Pool and the future of public-domain personality measures. Journal of Research in Personality, 40, 84–96.
  13. Simms, L. J. et al. (2019). Does the Number of Response Options Matter? Psychological Assessment, 31(4), 557–566.

Ready to Find Your Team?

8 questions. 2 minutes. No paywall. No ads. Just you and your people.