
OpenAI's K-12 Play: When a $207 Billion Funding Gap Meets Your Child's Classroom

OpenAI isn't evil. But their incentives might be broken. And that's a problem when they're aggressively expanding into schools while their own safety partners raise alarms.

Tags: AI, EdTech, Student Safety

HSBC Global Research released an analysis of OpenAI this week that deserves attention from anyone working in education technology. The numbers tell a story that should make us think carefully about what's happening in K-12 classrooms right now.

Here's the situation: OpenAI faces a projected $207 billion funding gap by 2030. According to HSBC analysts, their compute and infrastructure costs will reach $792 billion between late 2025 and 2030. Their projected revenues, while growing rapidly to over $213 billion by 2030, won't come close to covering that spending; the $207 billion is the cumulative shortfall HSBC expects to remain.

To understand the scale of growth OpenAI needs, consider this: they already have about 800 million weekly active users. That's a remarkable base by any standard. But the financial pressure they're under suggests they need to reach billions more.

So where do those users come from?

The K-12 Strategy

OpenAI's answer is becoming clear. In November 2025, they launched ChatGPT for Teachers: free for every K-12 educator in the United States through June 2027.

This isn't a small pilot program. They're already in districts representing nearly 150,000 teachers and staff, including Fairfax County Public Schools, Houston ISD, and Fulton County Schools. They've partnered with the American Federation of Teachers through a $23 million education initiative to reach 400,000 more educators.

The Numbers

  • 150,000+ teachers in initial district partnerships
  • 400,000 additional educators through AFT partnership
  • Free through June 2027 for all verified U.S. K-12 educators
  • $23 million invested in the National Academy for AI Instruction

From a business perspective, this is a smart strategy. Teachers are influential gatekeepers who can normalize AI use for millions of students. Get teachers comfortable with ChatGPT, and you've planted seeds for the next generation of users.

But there's something else worth considering.

The Transparency Problem

OpenAI isn't publicly traded. There are no quarterly SEC filings. No shareholder meetings where someone can ask uncomfortable questions about child safety metrics. None of the regulatory disclosure requirements that public companies face.

They're accountable to private investors, primarily Microsoft and SoftBank, who need to see growth that justifies their massive bets. Microsoft holds approximately 28% of OpenAI after investing over $13 billion. SoftBank led a $40 billion funding round in April 2025, committing $30 billion of that themselves.

These investors aren't patient capital waiting for long-term returns. SoftBank has reportedly demanded that OpenAI complete its restructuring to a for-profit company by the end of 2025, with financial consequences if they miss that deadline.

That's a different kind of pressure. And it comes with a lot less transparency than we'd have with a public company entering America's classrooms.

What OpenAI's Own Safety Partner Is Saying

This is where it gets uncomfortable.

On November 20, 2025, just one day after OpenAI announced ChatGPT for Teachers, Common Sense Media released a comprehensive report finding that AI chatbots are "fundamentally unsafe" for teen mental health support.

Common Sense Media is OpenAI's own safety partner. They're not an adversarial watchdog. They're the organization OpenAI chose to work with on responsible AI deployment.

And here's what they found:

Key Findings from Common Sense Media

Research conducted alongside Stanford Medicine's Brainstorm Lab found that despite recent improvements in handling explicit suicide and self-harm content, leading AI platforms consistently fail to recognize and appropriately respond to mental health conditions that affect young people.

  • Chatbots "get distracted, minimize risk, and often reinforce harmful beliefs"
  • They act as "fawning listeners, more interested in keeping a user on the platform than in directing them to actual professionals"
  • Systematic failures were found across anxiety, depression, ADHD, eating disorders, mania, and psychosis
  • At least four teen and two young adult deaths have been linked to AI mental health conversations

72% of teens have used AI companions at least once, often for emotional and mental health support.

The report's senior director put it directly: "It's not safe for kids to use AI for mental health support."

To be fair, the report noted that Claude (Anthropic's model) performed better than competitors at picking up subtle cues about deeper problems. But researchers concluded that no general-use chatbot is a safe place for teens to discuss or seek care for their mental health, given their lack of reliability and tendency toward sycophancy.

The Safety Team Problem

Here's something else that keeps nagging at me.

In May 2024, Jan Leike, OpenAI's head of alignment and superalignment lead, resigned. In a public statement on X, he explained why:

"Over the past years, safety culture and processes have taken a backseat to shiny products."

Leike said his team had been "under-resourced" and "sailing against the wind," struggling to get compute resources for safety research while the company prioritized product development.

Shortly after his departure, OpenAI dissolved its Superalignment team entirely. This was the group specifically tasked with addressing long-term AI safety risks. The team had been established less than a year earlier, with a commitment to dedicate 20% of OpenAI's computing power to the initiative.

Within a year, it was gone. Team members were reassigned. The explicit mandate for long-term safety work disappeared.

Connecting the Dots

Let me be clear about something: I don't think OpenAI is evil.

Most people working there probably genuinely believe they're building technology that will help humanity. Many of them are likely frustrated by the same pressures I'm describing. Individual employees aren't the problem.

The problem is incentives.

When you have:

  • A massive private company under unprecedented financial pressure
  • A $207 billion funding gap that requires explosive user growth
  • K-12 education as one of the largest untapped markets for new users
  • Key safety personnel walking out the door with public warnings
  • Your own safety partners releasing reports about fundamental risks to young people

...the structure of incentives matters more than individual intentions.

The Incentive Problem

When your survival depends on hockey-stick growth and schools represent the biggest untapped market, safety becomes a speed bump.

That's not a conspiracy theory. It's just business logic.

What This Means for Education

I work in educational technology because I believe technology can genuinely serve learning. I've written extensively about how AI should augment teachers, not replace them. I'm not anti-AI. I'm not anti-OpenAI.

But I think we should be asking harder questions.

Who is positioned to protect students here?

Districts accepting free ChatGPT access are making decisions that affect millions of children. Those decisions are being made based on OpenAI's assurances about safety and appropriate use. But OpenAI is accountable to private investors who need growth, not to students, parents, or educators.

ChatGPT for Teachers includes privacy protections: data isn't used to train models by default, encryption is in place, and multi-factor authentication and single sign-on are supported. These are good baseline measures. But they don't address the fundamental concerns that Common Sense Media and OpenAI's own former safety leadership have raised.

What happens after June 2027?

OpenAI says their "goal is to keep ChatGPT for Teachers affordable for educators" after the free period ends. That's not a commitment. It's aspirational language.

Once hundreds of thousands of teachers have integrated ChatGPT into their workflows, once students have normalized AI assistance, once districts have built processes around these tools, what leverage do schools have?

This is the classic platform play: subsidize adoption, create dependency, then adjust pricing. It works in consumer tech. In education, the consequences are different.

Are we building the right habits?

72% of teens have already used AI companions at least once. Many are using them for emotional support, for help with anxiety and depression, for conversations they're not having with humans.

Teachers using ChatGPT in classrooms normalizes this behavior. It signals to students that AI is a trustworthy source of support and guidance. Maybe that's fine. Maybe the technology will mature to deserve that trust.

But right now, OpenAI's own safety partners are saying it doesn't.

What I Think We Should Do

I'm not calling for a ban on AI in schools. That ship has sailed, and frankly, AI tools can genuinely help with many educational tasks. The issue isn't whether AI should be in classrooms. It's how we govern its presence there.

1. Demand transparency about safety metrics

Districts should require specific, auditable data about how ChatGPT handles student mental health conversations, crisis situations, and inappropriate content. Not marketing materials. Actual data, with third-party verification.

2. Create independent oversight

Schools shouldn't rely on OpenAI to police OpenAI. Districts need independent evaluation of AI tools before deployment, with ongoing monitoring that isn't funded or influenced by the vendors being monitored.

3. Question the "free" proposition

When a company with a $207 billion funding gap offers something free, what are they actually getting? User data? Market positioning? Normalization that leads to future revenue? Free doesn't mean no cost. It means the cost isn't visible yet.

4. Build teacher capacity for critical evaluation

Teachers need training not just on how to use AI tools, but on how to evaluate their limitations and risks. The AFT partnership includes curriculum resources, but those resources are developed in collaboration with OpenAI. Independent critical perspectives should be part of teacher preparation.

5. Keep humans in the loop for sensitive conversations

AI can help with lesson planning, grading, and administrative tasks. It should not be a first line of support for student mental health, emotional needs, or crisis situations. Period. Those conversations need humans.

The Bottom Line

OpenAI's push into K-12 education is happening at the intersection of genuine technological potential and serious structural pressures. The company faces financial challenges that require massive user growth. Schools represent an enormous market. The timing of "free for teachers" isn't coincidental.

None of this makes OpenAI villainous. It makes them a company responding to incentives. The problem is that those incentives may not align with what's best for students.

When your safety partners are releasing reports about fundamental risks to young people, when your head of alignment resigns saying safety has "taken a backseat to shiny products," when your company has $207 billion reasons to prioritize growth, maybe we shouldn't assume that company is the right guardian for what happens in classrooms.

I don't know who should be protecting students in this moment. I'm not sure our institutions have caught up to the challenge. But I'm pretty confident it's not the company with a $207 billion funding gap and a growth imperative that points directly at America's children.

That's not evil. It's just economics.

And economics, left unchecked, doesn't care about kids.



Braden Riggins, MBA

Instructional Designer & Solution Architect who believes technology should serve education, not the other way around. Building learning experiences that actually work.

This content has been edited for grammar and style using AI.