LLM Core AI

Unlocking Peak Hardware Performance via C++/CUTLASS Kernel Optimization

An SNU DLLab spin-off. We eliminate the 'Software Tax' of high-level Python frameworks. Experience Nova Engine: the ultimate layer that unlocks the hidden potential of modern GPUs/SoCs.

The Dual-Engine Strategy


Nova Engine (Tech Base)

C++/CUTLASS kernel optimization engine. Delivers inference performance near physical hardware limits, maximizing cloud infrastructure efficiency.
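
To make this concrete, here is a minimal sketch of the kind of kernel specialization CUTLASS enables. It is illustrative only, not Nova Engine code, and assumes CUTLASS 2.x headers and an Ampere-class (SM80) GPU.

```cpp
// Minimal sketch: a CUTLASS device-level GEMM with FP16 inputs and
// FP32 accumulation on Tensor Cores. Illustrative only; assumes
// CUTLASS 2.x and an SM80 (Ampere) target.
#include <cutlass/gemm/device/gemm.h>

using Gemm = cutlass::gemm::device::Gemm<
    cutlass::half_t, cutlass::layout::RowMajor,     // A
    cutlass::half_t, cutlass::layout::ColumnMajor,  // B
    cutlass::half_t, cutlass::layout::RowMajor,     // C
    float,                                          // accumulator type
    cutlass::arch::OpClassTensorOp,                 // use Tensor Cores
    cutlass::arch::Sm80>;                           // target architecture

cutlass::Status run_gemm(int M, int N, int K,
                         cutlass::half_t const* A, int lda,
                         cutlass::half_t const* B, int ldb,
                         cutlass::half_t* C, int ldc) {
  Gemm gemm_op;
  // Computes C = 1.0 * (A @ B) + 0.0 * C on the default stream.
  return gemm_op({{M, N, K},
                  {A, lda}, {B, ldb}, {C, ldc}, {C, ldc},
                  {1.0f, 0.0f}});  // {alpha, beta}
}
```

Selecting layouts, tile shapes, and instruction classes per architecture at compile time is what lets this style of kernel approach the hardware's peak throughput.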

Ideal for cloud providers (GCP/AWS/Azure) and enterprises running massive AI workloads.
Explore Tech Stack

Agentic Commerce (Revenue)

A reference implementation of Nova Engine. A high-performance autonomous agent generating immediate revenue in the global K-Beauty market.

  • 01 Merchandising
  • 02 Customer Service (CS)
  • 03 Marketing Automation

Company & Team

Spin-off from Seoul National University Deep Learning Lab (DLLab) · Visit DLLab

Leadership

Founded out of the Seoul National University Deep Learning Lab (DLLab), our leadership bridges academic rigor with production engineering.

Global Talent Pipeline

A global R&D team of systems engineers and applied researchers, focused on inference performance and real-world deployment.

Collaborations & Communities

  • Tenstorrent
  • DORA Community

Technology Roadmap

From Kernel Wins to a Global Inference Engine


Phase 1: Optimize (Now)

Kernel-Level Optimization for Peak Inference

We deliver real performance gains by engineering kernels (e.g., Tensor Core programs) and optimizing inference bottlenecks for modern GPU/SoC systems.
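
As a minimal illustration of what a "Tensor Core program" looks like at the lowest level, the sketch below uses CUDA's WMMA API on an SM70+ GPU; one warp computes a single 16x16x16 tile, and production kernels wrap this core in tiling, pipelining, and shared-memory staging.

```cpp
// Minimal Tensor Core sketch using the CUDA WMMA API. One warp
// computes one 16x16x16 GEMM tile; real kernels add tiling,
// double buffering, and shared-memory staging around this primitive.
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

__global__ void wmma_tile(const half* A, const half* B, float* C,
                          int lda, int ldb, int ldc) {
  wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
  wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
  wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

  wmma::fill_fragment(acc, 0.0f);      // zero the accumulator tile
  wmma::load_matrix_sync(a, A, lda);   // load a 16x16 tile of A
  wmma::load_matrix_sync(b, B, ldb);   // load a 16x16 tile of B
  wmma::mma_sync(acc, a, b, acc);      // acc += A @ B on Tensor Cores
  wmma::store_matrix_sync(C, acc, ldc, wmma::mem_row_major);
}

// Launch with exactly one warp, e.g.:
//   wmma_tile<<<1, 32>>>(A, B, C, 16, 16, 16);
```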

Proof Point

We run a benchmark-driven validation process that quantifies throughput and latency gains, and we build reproducible results with pilot partners.
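
A sketch of how per-kernel latency can be measured with CUDA events; `toy_kernel` is a placeholder for whatever kernel is under test, not part of our stack.

```cpp
// Benchmarking sketch: average per-launch latency via CUDA events.
// `toy_kernel` is a stand-in for the kernel under test.
#include <cuda_runtime.h>

__global__ void toy_kernel(float* out, const float* in, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) out[i] = in[i] * 2.0f;
}

float time_kernel_ms(float* out, const float* in, int n, int iters) {
  cudaEvent_t start, stop;
  cudaEventCreate(&start);
  cudaEventCreate(&stop);

  // Warm-up launch so one-time setup costs don't skew the numbers.
  toy_kernel<<<(n + 255) / 256, 256>>>(out, in, n);

  cudaEventRecord(start);
  for (int i = 0; i < iters; ++i)
    toy_kernel<<<(n + 255) / 256, 256>>>(out, in, n);
  cudaEventRecord(stop);
  cudaEventSynchronize(stop);

  float ms = 0.0f;
  cudaEventElapsedTime(&ms, start, stop);
  cudaEventDestroy(start);
  cudaEventDestroy(stop);
  return ms / iters;  // average latency per launch, in milliseconds
}
```

Throughput follows by dividing the work per launch (e.g., FLOPs or tokens) by this latency; reproducibility comes from pinning clocks, drivers, and inputs across runs.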


Phase 2: Nova Engine (Next)

Hardware-Agnostic Inference Acceleration Stack

Productizing our optimization into Nova Engine: a portable layer that maximizes inference performance across heterogeneous compute.
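
Nova Engine's API is not public; the following is a hypothetical sketch of what a hardware-agnostic dispatch layer can look like, with every name (`Backend`, `CpuBackend`, `make_backend`) invented for illustration.

```cpp
// Hypothetical sketch of hardware-agnostic dispatch: one operator
// interface, multiple tuned backends selected at runtime.
#include <memory>
#include <stdexcept>
#include <string>

struct Backend {
  virtual ~Backend() = default;
  // Single-precision row-major GEMM each backend must implement.
  virtual void gemm(int M, int N, int K,
                    const float* A, const float* B, float* C) = 0;
};

struct CpuBackend : Backend {
  void gemm(int M, int N, int K,
            const float* A, const float* B, float* C) override {
    // Naive reference path; a real backend would call a tuned kernel.
    for (int m = 0; m < M; ++m)
      for (int n = 0; n < N; ++n) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k) acc += A[m * K + k] * B[k * N + n];
        C[m * N + n] = acc;
      }
  }
};

std::unique_ptr<Backend> make_backend(const std::string& target) {
  if (target == "cpu") return std::make_unique<CpuBackend>();
  // GPU/SoC backends (CUDA, ROCm, NPU, ...) would register here.
  throw std::runtime_error("unsupported target: " + target);
}
```

The point of such a layer is that model code binds to the interface once, while each target gets its own tuned kernels underneath.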


Phase 3: Co-design (Later)

Hardware Co-design as an Option, Not a Dependency

Once software performance is proven and scalable, we may pursue selective hardware co-design or fabless development, without compromising our hardware-agnostic strategy.

Want measurable inference gains?

Tell us your workload. We’ll propose an optimization plan and a path to production with Nova Engine.

Contact
LLM Core AI - Inference Optimization & Agentic Commerce