
AI Chip Startup Rebellions Raises $400M for Rebel100

South Korean startup Rebellions hits a $2.3B valuation in a pre-IPO round to scale its Rebel100 AI accelerator and compete with industry leaders.

South Korean AI chipmaker Rebellions secured a $400 million pre-IPO funding round to scale mass production of its Rebel100 hardware platform. The funding values the company at $2.34 billion, a substantial increase from its $1.4 billion Series C in late 2025. If you manage large-scale AI inference clusters, the introduction of a high-efficiency alternative to Nvidia’s hardware alters your procurement roadmap for the coming year.

Hardware Architecture and Specifications

The Rebel100 is a data center-scale AI accelerator built as a system-on-chip from four homogeneous chiplets linked over the UCIe-Advanced interconnect. The silicon is manufactured on Samsung’s 4nm process node.

The package carries 144GB of HBM3E memory with an aggregate bandwidth of 4.8 TB/s.

Rebellions designed the chip specifically for heavy inference workloads. The hardware supports 2,048 TFLOPS of compute at FP8 precision and 1,024 TFLOPS at FP16 precision.

Performance and Power Draw

The Rebel100 operates within a 350W to 600W thermal design power envelope. Rebellions claims the accelerator matches the performance of the Nvidia H200 while reducing power consumption by up to 50 percent.

| Metric | Rebel100 Specification |
| --- | --- |
| Memory Capacity | 144GB HBM3E |
| Memory Bandwidth | 4.8 TB/s |
| FP8 Compute | 2,048 TFLOPS |
| FP16 Compute | 1,024 TFLOPS |
| Power Consumption | 350W to 600W TDP |

For specific workload benchmarks, the company reported a throughput of 56.8 tokens per second on Llama 3.3 70B. This test used FP8 precision in a tensor parallel 2 configuration, processing 2,000 input tokens and generating 2,000 output tokens. Consistent throughput at this power level is critical when streaming LLM responses in high-concurrency production environments.
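The reported throughput is easy to sanity-check against a memory-bandwidth roofline. A minimal sketch, assuming decode is bandwidth-bound and that an FP8 Llama 3.3 70B checkpoint occupies roughly 70 GB of weights (both assumptions ours, not figures from the article):

```python
# Back-of-envelope check on the reported 56.8 tok/s figure.
# Assumptions (not from the article): decode is memory-bandwidth bound,
# and the 70B-parameter model in FP8 streams ~70 GB of weights per step.

WEIGHT_BYTES = 70e9      # ~70B params x 1 byte (FP8)
BW_PER_CARD = 4.8e12     # 4.8 TB/s HBM3E bandwidth per Rebel100
TP = 2                   # tensor parallel 2, as in the reported benchmark

# Each decode step streams all weights once; TP shards them across cards.
step_time_s = (WEIGHT_BYTES / TP) / BW_PER_CARD
ceiling_tok_per_s = 1 / step_time_s

reported = 56.8
latency_2k_tokens_s = 2000 / reported  # time to emit the 2,000-token response

print(f"bandwidth ceiling: {ceiling_tok_per_s:.0f} tok/s per stream")
print(f"reported 56.8 tok/s -> {latency_2k_tokens_s:.1f} s per 2,000-token reply")
```

Under these assumptions the reported figure sits below the theoretical ceiling, which is what you would expect once attention, KV-cache traffic, and interconnect overhead are accounted for.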

Supply Chain Advantage

Rebellions maintains an unusual structural advantage in the memory supply chain. Following its December 2024 merger with SK Telecom spinoff Sapeon Korea, the company counts both SK Hynix and Samsung Electronics as strategic shareholders. This dual relationship secures reliable access to highly constrained high-bandwidth memory components.

CEO Sunghyun Park is directing sales efforts toward specialized AI infrastructure operators like Meta and xAI. The company is deliberately avoiding direct competition for massive hyperscaler accounts like Amazon Web Services or Microsoft Azure. Rebellions is currently conducting active proof-of-concept trials with customers in the United States.

Capital and Expansion

Mirae Asset Financial Group and the Korea National Growth Fund led the pre-IPO round, bringing total raised capital to $850 million. The government fund contributed approximately $166 million through the “K-Nvidia” initiative, a sovereign strategy to nurture domestic AI semiconductor leaders. Existing backers including Arm, KT, and Aramco’s Wa’ed Ventures also participated. The company plans to launch an initial public offering in late 2026.

Evaluate your current inference compute costs against the 350W to 600W footprint of emerging chiplet architectures. Track the Rebel100’s availability across specialized cloud providers and calculate your specific cost per token on FP8 workloads before locking into long-term H200 cluster commitments.
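One way to start that evaluation is an energy-cost estimate per million output tokens. A rough sketch using the article's TDP and throughput figures; the electricity price is an illustrative assumption, so substitute your own datacenter numbers:

```python
# Rough energy cost per million output tokens for a Rebel100 TP2 setup.
# The electricity price below is an assumed placeholder, not from the article.

tdp_watts = 600        # upper end of the stated 350W-600W TDP range
tok_per_s = 56.8       # reported Llama 3.3 70B FP8 throughput (TP2)
cards = 2              # the benchmark used two accelerators
price_per_kwh = 0.10   # assumed electricity price, USD (adjust to your site)

seconds_per_mtok = 1e6 / tok_per_s
kwh_per_mtok = cards * tdp_watts * seconds_per_mtok / 3.6e6  # J -> kWh
cost_per_mtok = kwh_per_mtok * price_per_kwh

print(f"{kwh_per_mtok:.1f} kWh, ${cost_per_mtok:.2f} per million output tokens")
```

This covers chip power only; a full comparison against an H200 cluster would also need cooling overhead (PUE), amortized hardware cost, and your actual sustained utilization.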
