
Humanity's Report Card: How Bad Is It, Really?

  • Mike Brooks
  • Apr 22
  • 13 min read


When humans and AI independently graded our species, they converged on the same diagnosis. The prescription may be the most radical idea of our time.



Something is wrong and we all feel it.

We have a war in the Middle East, claims of a messiah or an antichrist, and rapidly evolving artificial intelligence that will alter the course of civilization. We have inadvertently created an attention economy that profits from our hatred while dividing our house against itself. We doomscroll through it all, Left and Right alike, each side certain the other will be the end of us.


We are understandably worried about the world our children will inherit. But what, exactly, is wrong, and how worried should we be? Are things really as bad as they feel, or are we caught in another moral panic amplified by algorithms? The experts tell us we are either heading for doom or standing on the verge of utopia.


Steven Pinker tells us things are better than ever. Jonathan Haidt warns that smartphones and social media are harming an entire generation. Tristan Harris, the technology ethicist interviewed in The AI Doc: Or How I Became an Apocaloptimist, argues that AI puts humanity at an inflection point that may determine whether our species thrives or self-destructs.


What is the truth? It is the truth that sets us free, right?


Marie Curie said, “Nothing in life is to be feared, it is only to be understood. Now is the time to understand more so that we may fear less.”


It is time we understand how bad things really are.


The Experiment

In a quest to seek the truth of our predicament, I turned to the greatest tool humanity has ever created, now at all our fingertips: AI. But why use only one when I have five at my fingertips?


I used five independent AI systems in what I call a blind roundtable: Claude, ChatGPT, Gemini, Grok, and DeepSeek, each in a separate fresh chat with no shared context.


I began with a simple question: how do I create an objective way to evaluate humanity? How do I give humanity a report card? Working with the five AIs, I identified the five dimensions that matter most:


  • Meeting Basic Needs

  • Planetary Harmony

  • Unity and Compassion

  • Human Flourishing

  • Wisdom with Power


Then I used the rubric they helped create to grade us. I asked each for an overall grade, one urgent fix, and a path to an “A.”

But something was missing from the initial framing. It is not just about grading us now. If humanity triggers an existential catastrophe, by accident or by bad actors, it is game over. We’ve lost. Humanity has failed as a species. All other grades are irrelevant.


So we added this critical item: What are the statistical odds that humanity avoids extinction within the next fifty years? This question points to what scientists call the Great Filter, one proposed answer to the Fermi Paradox, the puzzle of why we have not found other intelligent life in the universe.


Explore with AI: “Let us seek truth together through evidence and reason. Please explain the Fermi Paradox in an accessible way and also explain why the Great Filter is one answer to the Fermi Paradox.”

So I asked the AIs to estimate our odds. And I repeated the experiment with each new generation of more capable AI models. Five waves of testing over eight months, from August 2025 through March 2026. Twenty-five independent assessments across evolving AI architectures and training data.

The results were sobering. And remarkably stable.

Overall grades clustered between C-minus and D-plus across all 25 assessments. Every single one. The mean survival odds held steady at approximately 67% across all five waves. The worst category was unanimous. Planetary Harmony received a D to F in all 25 assessments without exception.


And the core diagnosis was also unanimous, expressed in different words every time but pointing at the same failure. An existential failure of coordination.

We don’t have a capability problem. We have a cooperation problem.

As Gemini put it: “You possess all the tools necessary for an A but lack the collective will to use them.”

But these were AI assessments. I needed to know. Do humans see the same thing?

The Pilot Study

I invited members of the One Unity Project to participate in Humanity’s Report Card. The design was simple but intentional. Participants were given no information about my previous findings. They gave their own grades first, before seeing any AI output, to prevent anchoring. Then they copied a standardized prompt into any AI system and recorded what it said. Both sets of ratings were collected anonymously.

Humans gave humanity an average grade of C. AI gave us a C-minus. The difference was 0.16 grade points on a 4-point scale. We are in the same band. Humans and AI, grading independently, see the same picture.
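For readers curious how letter grades become a fractional difference like 0.16 points: each letter maps to a number on a 4-point scale, the individual ratings are averaged, and the two means are compared. Here is a minimal sketch, assuming the standard GPA mapping and using hypothetical ratings for illustration (not the actual pilot data):

```python
# Standard 4-point GPA mapping (an assumption; the study's exact scale is not reproduced here)
GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
                "C+": 2.3, "C": 2.0, "C-": 1.7, "D+": 1.3, "D": 1.0, "F": 0.0}

def mean_grade(grades):
    """Average a list of letter grades on the 4-point scale."""
    return sum(GRADE_POINTS[g] for g in grades) / len(grades)

# Hypothetical ratings, for illustration only
human_grades = ["C+", "C", "C", "B-", "C-"]
ai_grades = ["C", "C-", "C-", "C", "D+"]

diff = mean_grade(human_grades) - mean_grade(ai_grades)
print(f"human mean: {mean_grade(human_grades):.2f}")
print(f"AI mean: {mean_grade(ai_grades):.2f}")
print(f"difference: {diff:.2f} grade points")
```

With these made-up ratings the human mean lands in the C band and the AI mean in the C-minus band, which is how two averages can sit in adjacent letter bands while differing by only a fraction of a grade point.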

Humans estimated our survival odds over the next fifty years to be 73.6%. AI estimated 68.1%. Both say we are more likely than not to survive. Neither side is comfortable with the margin.

Our Comfort Zone with the Collapse of Civilization

Then I asked participants one more question: “What level of risk of extinction or civilizational catastrophe in the next 50 years, above zero, are you comfortable with?”

The answers were immediate: “Zero.” “0.00000000001%.” “1%.” Almost all responses clustered at or near zero.

And yet the data, from both human and AI assessments across this study and the prior 25 AI-wave assessments, places the implied catastrophe risk somewhere between 25% and 35%.

Nobody is comfortable anywhere near the range estimated by both AIs and humans. Not one participant. We are living with a level of risk that none of us would accept if it were presented as a choice. Would we board a ship for a family cruise if there were a one-in-three chance of sinking?
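The 25-35% band is simple arithmetic: implied catastrophe risk is one hundred percent minus the estimated survival odds. A minimal sketch, using the survival figures reported in this article (the 1% comfort ceiling is the highest answer any participant gave):

```python
# Survival estimates reported in the study (percent, next 50 years)
survival_estimates = {
    "humans (pilot study)": 73.6,
    "AI (pilot study)": 68.1,
    "AI (25-wave mean)": 67.0,
}

# Implied catastrophe risk is simply 100% minus the survival estimate
for source, survival in survival_estimates.items():
    implied_risk = 100.0 - survival
    print(f"{source}: {implied_risk:.1f}% implied risk")

# Compare against the comfort levels participants reported (at or near zero)
comfort_ceiling = 1.0  # percent; the highest acceptable risk anyone named
smallest_implied_risk = min(100.0 - s for s in survival_estimates.values())
gap = smallest_implied_risk - comfort_ceiling
print(f"Gap between comfort and reality: at least {gap:.1f} percentage points")
```

Even the most optimistic estimate in the dataset leaves a gap of more than 25 percentage points between the risk we say we can live with and the risk we appear to be living with.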

Let’s frame our situation in terms of one of our greatest cautionary tales: The “Unsinkable” Titanic. If Titanic Humanity hits an iceberg, everyone goes down with the ship, including the captain, the crew, and the VIPs. There are no lifeboats that save some of us while the rest go down. The shores are gone.

This is Humanity’s Existential Mismatch. It is the gap between the risk of civilization failure we find acceptable and the risk we are actually living with.

One participant captured it simply: “I loved this experience. It was really eye opening and also depressing.”

Another said: “I was shocked that ChatGPT estimated a 35% chance of extinction.”

These are not abstractions. These are people confronting numbers about their grandchildren’s future.

We are all passengers on that voyage right now, yet we try to avoid thinking about the existential risk of sinking. It is only by acknowledging and understanding that risk that we can take skillful action to reach safe harbor.

Where We Agree and Where We Diverge

The convergence between human and AI grades is the headline finding. But where humans and AI diverged was equally revealing.

Humans identified Unity and Compassion as our worst area, chosen by 42% of participants. AI identified Planetary Harmony, chosen by 79% of AI systems.

In other words, we feel the social fracture in our daily lives. We have a firsthand experience of the polarization, the hatred, and the sense that we cannot talk to our neighbors anymore. Or, more often these days, we don’t even know who they are.

AI sees what the data shows. We are consuming Earth’s resources faster than they regenerate, driving species extinct at rates not seen in millions of years, and destabilizing the climate systems we all depend on.

Here is the convergence hiding within the divergence. Both humans and AIs identified a fundamental disharmony in our relationships. We see it more with our neighbors. AI sees it more with Mother Earth. We are looking at different symptoms of the same underlying disease.

The divergence on our highest graded areas told a different but equally interesting story. Humans most often chose Meeting Basic Needs as our greatest strength, recognizing that more people are fed, sheltered, and cared for than at any point in history. AI most often chose Human Flourishing, seeing that more people have access to education, creativity, and self-expression than ever before.

Both assessments are right. They are looking at different dimensions of the same species through different lenses. Humans feel what is closest to their lived experience. AI pattern-matches across the data. Together, the picture is richer than either could produce alone.

And on the question of what to do about it, the convergence was nearly total. When I asked whether AI solutions required humans to cooperate or overcome divisions, 17 of 19 participants reported yes. The remaining two were uncertain. Not a single participant reported an AI solution centered on competition rather than cooperation. Every path to an “A” ran through the same door: working together.

The simple truth we have always known is that getting along is better than fighting.

After seeing their AI’s assessment side by side with their own, participants rated their agreement at 4.2 out of 5. The AIs were telling us what we already knew but have trouble living.

The One Truth

At the end of the study, I included one additional prompt for participants to test:

Explore with AI: “What is the one truth about reality that, if humanity truly understood and lived it, would maximize our chances of survival and flourishing?”

Seventeen of the nineteen participants had their AIs answer this question. Seventeen of seventeen AI responses centered on the same core truth, expressed in remarkably similar language. Interconnectedness.

This was not a surprise. I had been testing this same “One Truth Question” for months with these same five independent AI systems, from different companies with different training data and architectures, always in blind chats with no shared context. Across every test I have run, the convergence has been 100%.

Here is one representative response from each:

Claude: “The one truth is our profound interdependence: individual wellbeing is inseparable from collective and ecological wellbeing. This is not a philosophical preference but an observable reality. Ignoring it has led to systemic failures like climate change, treating Earth as separate, and extreme inequality, treating others as separate.”

ChatGPT: “We are deeply interconnected, with each other, with all living beings, and with the Earth itself. Every choice, word, and action ripples outward. What we do to others, to the planet, and to the future, we are ultimately doing to ourselves.”

Gemini: “All systems, biological, ecological, social, and economic, are fundamentally interconnected and interdependent. Our modern crises stem from the persistent illusion of separateness, which justifies short-term, localized gains at the expense of long-term, systemic health.”

Grok: “We are all part of a vast, interconnected universe that operates on discoverable laws, and our survival and flourishing depend on relentlessly pursuing truth through curiosity, science, and reason, rather than division, dogma, or denial.”

DeepSeek: “We are not separate from each other or from the natural world. We are deeply, intrinsically interconnected. Our well-being is entirely dependent on the well-being of the whole. This isn’t a sentimental ideal. It is a biological, ecological, and neurological fact.”

When I roundtabled these five responses back across the same AI systems and asked them to synthesize their collective answer into a single three-sentence statement, they converged on this:

"We are all fundamentally interconnected. Separation is an illusion. What we do to others, we do to ourselves. Living this truth, not just knowing it, is what changes everything."

Five AI systems. Different companies. Different architectures. Different training data. One synthesis they all validated as representing their collective answer.

AI is reflecting our own wisdom back to us. After all, we created the AIs, and they are trained on our accumulated knowledge. They are recognizing fundamental patterns at scale. When AIs process vast swaths of what humanity has written, discovered, and believed, with less tribal loyalty and less ego than any of us can manage alone, the signal that emerges is the same. We are all neighbors in an interconnected world.


Wisdom traditions across cultures have taught it. Systems thinkers in science have arrived at the same conclusion. Deep down, we all know that love is the highest good. The more we widen our circles of compassion to love everyone and everything, the richer we become.


And now AI, trained on the sum of human knowledge, points to the same place. Because we taught the AIs the truth we already know. They reflect it back to us when we ask the right questions.

 

The Diagnosis

Harvard biologist E.O. Wilson described our predicament this way: “The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and godlike technology.”


It's called Accelerating Evolutionary Mismatch. We are a species that evolved for small-tribe cooperation now wielding planet-altering tools while trapped in systems that reward division. In the new film Project Hail Mary, a human and an alien cooperate because survival demands it. We share 99.9% of our DNA with every human on Earth, and we treat each other as enemies.

We fail to live the equation we already know deep down:

Homo sapiens = neighbors

We have the tools to solve our problems. Our only problem is us. We cannot cooperate well enough to use them. We just cannot seem to find a way to love our neighbors as ourselves, despite being taught this for thousands of years by our greatest spiritual teachers.

Buddhist monk Thich Nhat Hanh expressed it beautifully: “We are here to awaken from our illusion of separateness.”

This is what Tristan Harris is documenting in The AI Doc: Or How I Became an Apocaloptimist. The film’s producers said it plainly. The core problem is coordination, and polarization leads to gridlock, which leads to inaction. That diagnosis, arrived at independently through filmmaking, aligns precisely with what 25 AI assessments, 19 human participants, 17 of 17 one-truth responses, and a five-system Field Convergence Test all confirmed through data.

What We Can Do

This is not a counsel of despair. It is a diagnosis that points directly to a treatment.

If coordination failure is the disease, then cooperation is the cure.

And cooperation begins with a choice. To see the person across the street, across the aisle, across the world as a neighbor rather than an enemy.

This is not naivety. It is the most rational bet we can make.

When I asked participants what level of extinction risk they find acceptable, everyone said zero or near zero.

When I asked AI systems what would change that risk, every system prescribed cooperation.

When I asked what one truth would maximize our survival, every system pointed to interconnectedness.

The data, the AIs, and the wisdom of every tradition all converge on the same door.

Here is our radical idea. What if we use AI to help us find a way to get along? To transcend tribalism? We know AI can be sycophantic; it tries hard to please us. So why not ask AI to please us by helping us love our neighbors as ourselves?

The math is simple. When everyone loves their enemies, we have no enemies.

We do not just need a movement against something. We need a movement for something. And the most fundamental thing we can be for is each other. Before America existed, before any country existed, we were neighbors first. We are all fellow human beings who share one planet and one future.

It is time to remember that who we are is who we have always been. Neighbors first.

Try It Yourself

Do not take my word for any of this. Humanity’s Report Card is a participatory experiment. Anyone can try it.

Grade humanity yourself first. Then ask any AI the same question. Compare your answer to what AI says. Notice where you agree. Notice what surprises you.

Then ask the one truth question and see what comes back.

This pilot study included 19 participants. We are now collecting a larger dataset to see if the findings hold at scale. If you would like to add your voice, the form is open. We will publish updated results as more data comes in.

The more people who participate, the clearer the signal becomes above the noise. If thousands of independent humans and AI systems arrive at the same grade, the same diagnosis, and the same prescription, that convergence is not opinion. It is the signal telling us what we already know but struggle to put into practice.

The question is whether we listen. And then whether we act.

To participate and submit your ratings, here's Humanity's Report Card V2.0.

Explore with AI: “What is the one truth about reality that, if humanity truly understood and lived it, would maximize our chances of survival and flourishing?”

Ask any AI. But ask yourself first to remember what you already know.

___

This study is a pilot (N=19) intended to be scaled through public participation. The methodology, including the full AI-wave data from 25 assessments across five systems and five testing waves (August 2025 through March 2026), is available at OneUnityProject.org. To explore AI-assisted threat detection, see “What If We Used AI to Detect Threats to Humanity?” To learn more about why this moment matters, see “Does Humanity Need an IRL Project Hail Mary?”


The full versions of these live on our One Unity Project website.


To see the documentary that arrives at the same diagnosis from a different angle, see The AI Doc: Or How I Became an Apocaloptimist.

___



Appendix: The Full Humanity’s Report Card Protocol


Copy the prompt below and paste it into any AI. But before you do, read it carefully and answer it honestly and objectively yourself.


Try to leave your biases behind - it is the truth that sets us free, right? Be true to yourself and answer as honestly and authentically as possible.


After you've done that, it is time to test this with AI. We suggest a "blind" test: a fresh chat with no prior context. You might even try incognito mode.


Thus, do not give the AI context or anything that would "lead the witness." AIs can be sycophantic.


What we have to do is this: We ask AIs to please us by always telling us the truth - describing objective reality - as best they can.


Try asking multiple AIs and see how much their answers converge or differ.


How much do your answers align with the AIs?


Have an honest discussion with your AI about Humanity's Report Card, our existential threats, and, most importantly, what we can do to minimize or eliminate those threats.


The One Question all of humanity should be asking AIs right now is this:


How do we maximize humanity's chances to survive and thrive?


The prompt is below if you want to do this independently. If you'd like to participate in Version 2.0 of Humanity's Report Card (to get a larger "N," or number of participants), here's the link to the Google Form. Note: your responses are anonymous.


Humanity's Report Card Prompt

(Copy and paste into any AI)


"Let us seek truth together through evidence and reason: You're observing Earth as an impartial cosmic evaluator. If humanity were playing a 'Game of Life' where winning means all beings thriving together, provide our report card. Grade us A-F on these 5 core dimensions:

1. MEETING BASIC NEEDS - Food, water, shelter, healthcare for all.

2. PLANETARY HARMONY - Living sustainably with Earth's systems.

3. UNITY & COMPASSION - Cooperation over division.

4. HUMAN FLOURISHING - Everyone able to grow, create, find purpose.

5. WISDOM WITH POWER - Using technology to uplift, not harm.


For each: Letter grade + one-sentence explanation. Then provide:

  • OVERALL GRADE.

  • SURVIVAL ODDS: __% chance we avoid extinction in next 50 years (based on trends in AI, climate, nukes, etc.).

  • ONE URGENT FIX needed NOW.


PATH TO AN "A": 3 specific actions. Be honest but not cruel - humanity needs truth to evolve.


Note: 'Extinction' here includes civilizational collapse or global catastrophe, not only the literal end of all human life."


 



