Multiverse Computing Launches HyperNova 60B 2602: 50% Compressed LLM

Multiverse Computing, the Spanish AI compression leader, announced the release of HyperNova 60B 2602, a 50% compressed version of OpenAI’s gpt-oss-120B, now freely available on Hugging Face.

CompactifAI: Quantum-Inspired Technology

The model is built with Multiverse’s proprietary CompactifAI technology, which compresses neural networks using tensor-network decompositions, mathematical tools originally developed for quantum many-body physics. By factoring large weight tensors into smaller interconnected cores, the approach substantially reduces memory requirements while largely preserving model performance.
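CompactifAI’s exact method is proprietary, but the core idea can be illustrated with the simplest tensor-network-style factorization: truncated SVD, which replaces one dense weight matrix with two thin factors. The matrix sizes and rank below are illustrative, not taken from HyperNova.

```python
import numpy as np

# A toy weight matrix standing in for one layer of an LLM (hypothetical sizes).
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))

# Truncated SVD: keep only the top-k singular directions. The result is a
# two-core "matrix product" approximation of W, the simplest tensor network.
k = 64
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * s[:k]   # 512 x 64 factor (singular values folded in)
B = Vt[:k, :]          # 64 x 512 factor
W_approx = A @ B       # low-rank reconstruction of W

# Two thin factors replace one dense matrix, so parameters shrink.
orig_params = W.size                  # 512 * 512 = 262144
compressed_params = A.size + B.size   # 2 * (512 * 64) = 65536
print(f"parameter ratio: {compressed_params / orig_params:.2f}")  # 0.25
```

In a real model, the retained rank would be chosen per layer to trade size against accuracy; the "preserving the most vital components" claim later in this article corresponds to keeping the directions carrying the most signal.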

Technical Specifications

  • Compression: 50% reduction compared to gpt-oss-120B
  • Memory: Reduced from 61GB to 32GB
  • Benchmarks: near-parity with the uncompressed model on tool calling
  • Tau2-Bench: 5x improvement in performance
  • Terminal Bench Hard: 2x performance increase
  • Availability: Free on Hugging Face
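A quick sanity check on the figures above: the stated memory drop from 61GB to 32GB works out to roughly the advertised 50% compression.

```python
# Sanity-check the headline figures from the spec list above.
orig_gb, compressed_gb = 61, 32
reduction = 1 - compressed_gb / orig_gb
print(f"memory reduction: {reduction:.1%}")  # about 47.5%, i.e. roughly 50%
```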

Relevance for European Sovereign AI

The launch comes at a critical juncture, with European policymakers prioritizing sovereign AI and addressing infrastructure limitations. Multiverse’s CompactifAI technology offers a pathway to reduce both computational costs and the carbon footprint of large language models.

Compression Capabilities

According to Multiverse, CompactifAI can shrink models by up to 95% with minimal accuracy loss by preserving the most vital components of the neural network. HyperNova 60B 2602 also demonstrates that compression can be applied as an iterative improvement process rather than a one-time optimization.

Availability

The model is now freely available on Hugging Face, making it accessible to developers and companies interested in efficient, low-cost AI.

This post was generated by AI using GLM-4.7
