AI Weekly: Meta's $14.8B Bet, EU Rise & Apple Doubt

The artificial intelligence landscape witnessed unprecedented developments this week, with industry giants making bold strategic moves while researchers exposed fundamental limitations in current AI systems. From Meta's massive investment to Europe's AI breakthrough with Mistral and Apple's controversial research findings, these developments signal both the immense potential and current constraints of AI technology.

Meta's $14.8 Billion Bet on Superintelligence

Meta Platforms finalised its largest AI investment to date, acquiring a 49% stake in Scale AI for approximately $14.8 billion. This strategic move brings Scale AI's 28-year-old founder Alexandr Wang into Meta to lead a new "Superintelligence" initiative, marking CEO Mark Zuckerberg's most ambitious push yet in the AI race.

What makes this deal significant:

  • Strategic structure: The 49% non-voting stake avoids antitrust review while giving Meta access to Scale AI's data labelling infrastructure

  • Talent acquisition: Wang joins Meta's elite 50-person superintelligence team, operating near Zuckerberg's office at Menlo Park headquarters

  • Competitive response: This follows disappointments with Meta's Llama 4 model and delays in the "Behemoth" model, pushing Zuckerberg to take direct control

The deal positions Meta to compete more aggressively with OpenAI, Google, and Anthropic, backed by Meta's planned $60-65 billion AI infrastructure investment for 2025. However, the partnership raises concerns among competitors, with Google reportedly cutting ties with Scale AI due to Meta's involvement.

Europe's AI Sovereignty Push: Mistral's Reasoning Breakthrough

French AI startup Mistral launched Europe's first reasoning model this week, introducing "Magistral" to compete directly with OpenAI's o3 and DeepSeek's R1. This represents a crucial milestone for European AI independence, backed by French President Emmanuel Macron.

Magistral's key differentiators:

  • Multilingual reasoning: Unlike competitors that primarily reason in English or Chinese, Magistral excels in European languages such as French, Spanish, and German, as well as Arabic

  • Transparent logic: Features traceable reasoning chains for compliance-heavy industries like law, finance, and healthcare

  • Open-source availability: Magistral Small (24B parameters) is freely available under the Apache 2.0 license, while the Medium version targets enterprise users


Performance benchmarks show Magistral Medium achieving 73.6% on AIME 2024 tasks, rising to 90% with majority voting. Mistral also announced "Mistral Compute," an AI infrastructure platform partnering with Nvidia to provide European alternatives to AWS and Azure.
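Majority voting, the technique behind the jump from 73.6% to 90%, simply samples several answers to the same problem and keeps the most common one. A minimal sketch (the sampled answers here are hypothetical, not actual Magistral outputs):

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent answer among sampled model completions."""
    counts = Counter(answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical: five sampled answers to one AIME-style problem
samples = ["204", "197", "204", "204", "197"]
print(majority_vote(samples))  # → "204"
```

The intuition is that independent reasoning paths tend to agree on correct answers more often than they agree on any single wrong one, so voting filters out scattered errors.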

Apple's Reality Check: The "Illusion of Thinking" Under Fire

Apple researchers published controversial findings about AI reasoning capabilities in a study titled "The Illusion of Thinking". Testing models including OpenAI's o3, DeepSeek R1, and Claude 3.5 Sonnet on controlled puzzles, Apple discovered what they claimed were fundamental limitations that challenge current AI reasoning claims.

Apple's key research findings:

  • Complete failure beyond complexity thresholds: Models experienced total accuracy collapse on complex problems

  • Counterintuitive scaling: Reasoning effort increased with complexity, then declined, despite adequate computational resources

  • Three performance regimes: Low-complexity tasks where standard models outperformed reasoning models, medium-complexity tasks where reasoning helped, and high-complexity tasks where both failed

Expert Pushback and Methodological Concerns

However, AI experts are heavily disputing Apple's conclusions, suggesting the company may be biased due to their own AI struggles. Critics argue that Apple's research contains significant methodological flaws that undermine its credibility.

Key criticisms include:

  • Misunderstanding cumulative error: Experts argue that Apple mischaracterises normal probabilistic behaviour as a fatal flaw, noting that even with 99.99% per-token accuracy, complex tasks naturally face cumulative error challenges, a reality that applies to humans as well

  • Penalising intelligent abstraction: When AI models shift to describing algorithms rather than executing every step on massive problems, Apple labelled this as failure, but critics see this as sophisticated problem-solving behaviour

  • Rigged evaluation criteria: The study's rules appeared designed to make AI systems fail, rather than assess their actual capabilities in realistic scenarios
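The cumulative-error argument in the first criticism can be made concrete: if each token is correct with probability p and errors compound independently, a chain of n tokens succeeds with probability roughly p^n. A minimal sketch of that arithmetic (the independence assumption is a simplification, not a claim from the critics' analyses):

```python
def chain_success_probability(per_token_accuracy, num_tokens):
    """Probability that every token in a reasoning chain is correct,
    assuming errors are independent across tokens (a simplification)."""
    return per_token_accuracy ** num_tokens

# Even at 99.99% per-token accuracy, long chains degrade sharply:
for n in (1_000, 10_000, 100_000):
    print(n, round(chain_success_probability(0.9999, n), 3))
```

At 10,000 tokens the chain succeeds only about 37% of the time, and at 100,000 tokens essentially never, which is why critics see collapse on long, exhaustive puzzle solutions as expected probabilistic behaviour rather than a fatal reasoning flaw.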

Some industry observers suggest Apple's harsh critique may stem from their own AI development challenges, including their struggling Apple Intelligence platform and difficulties competing with more advanced AI systems.

With 73% of Apple Intelligence users reporting little value from current features, the company's position as an AI critic raises questions about potential bias in their research methodology.


Apple's Design Struggles Continue

Adding to Apple's AI challenges, the company's new "Liquid Glass" interface design for iOS 26 and macOS 26 is receiving harsh criticism from early testers. The transparency-heavy design creates readability issues and interface chaos, particularly in the iOS 26 control centre, highlighting Apple's broader struggles to match competitors' AI innovation pace.

What This Means for AI's Future

This week's developments illustrate AI's current paradox of massive investments and breakthrough announcements alongside disputed research about fundamental limitations. While Meta bets billions on superintelligence and Europe asserts AI sovereignty through Mistral, the controversy surrounding Apple's research highlights the ongoing debate about AI capabilities versus limitations.

For businesses evaluating AI integration, these developments underscore the importance of realistic expectations while recognising that research methodology and potential bias can significantly influence conclusions. The gap between AI marketing promises and actual capabilities remains a subject of intense debate, making careful evaluation crucial for successful implementation.

🤔 Want to navigate these rapidly evolving AI trends and emerging controversies with confidence?

Join FutureCraft AI's Early Access program to experience how truly brand-aligned AI can deliver superior results while cutting through the noise of conflicting research and industry disputes.

🤝🏼 Sign up for our FREE early access program now!

Join our waitlist

Be among the first to experience FutureCraft AI. Join the waitlist today for early access updates.