
An SNU DLLab spinoff. We eliminate the 'software tax' of high-level Python frameworks. Experience Nova Engine: the optimization layer that unlocks the untapped performance of modern GPUs and SoCs.
A C++/CUTLASS kernel optimization engine that delivers inference performance near the hardware's physical limits, maximizing cloud infrastructure efficiency.
A reference implementation of Nova Engine: a high-performance autonomous agent generating immediate revenue in the global K-Beauty market.

Spin-off from Seoul National University Deep Learning Lab (DLLab), bridging academic rigor with production engineering.
A global R&D team of systems engineers and applied researchers, focused on inference performance and real-world deployment.

From Kernel Wins to a Global Inference Engine
Kernel-Level Optimization for Peak Inference
We deliver real performance gains by engineering kernels (e.g., Tensor Core programs) and optimizing inference bottlenecks for modern GPU/SoC systems.
Our benchmark-driven validation process quantifies throughput and latency, producing reproducible results with pilot partners.
Hardware-Agnostic Inference Acceleration Stack
Productizing our optimization work into Nova Engine: a portable layer that maximizes inference performance across heterogeneous compute.
Hardware Co-design as an Option, Not a Dependency
Once software performance is proven and scalable, we may pursue selective hardware co-design or fabless development, without compromising our hardware-agnostic strategy.
Tell us your workload. We’ll propose an optimization plan and a path to production with Nova Engine.
Contact