
About the paper On-Device Training Under 256KB Memory ...

About the paper Edge AI: On-Demand Accelerating Deep Neural Network Inference via Edge Computing...

About the paper Group Normalization...
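
For reference, here is a minimal NumPy sketch of what Group Normalization computes over an NCHW tensor: channels are split into groups, each group is normalized per sample (independently of the batch dimension), and a per-channel affine transform is applied afterwards. The function name `group_norm` and the `num_groups=32` default are illustrative assumptions, not code from the paper.

```python
import numpy as np

def group_norm(x, gamma, beta, num_groups=32, eps=1e-5):
    """Group Normalization over an NCHW tensor (illustrative NumPy sketch).

    x:     (N, C, H, W) activations
    gamma: (C,) per-channel scale
    beta:  (C,) per-channel shift
    """
    n, c, h, w = x.shape
    assert c % num_groups == 0, "channels must divide evenly into groups"
    # Group the channels so each group is normalized together, per sample.
    xg = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = xg.mean(axis=(2, 3, 4), keepdims=True)
    var = xg.var(axis=(2, 3, 4), keepdims=True)
    xg = (xg - mean) / np.sqrt(var + eps)
    x_norm = xg.reshape(n, c, h, w)
    # Per-channel affine transform, as in BatchNorm/LayerNorm.
    return x_norm * gamma.reshape(1, c, 1, 1) + beta.reshape(1, c, 1, 1)

# Example: batch of 2, 64 channels, 8x8 feature maps.
x = np.random.randn(2, 64, 8, 8).astype(np.float32)
out = group_norm(x, gamma=np.ones(64, np.float32),
                 beta=np.zeros(64, np.float32), num_groups=32)
print(out.shape)  # (2, 64, 8, 8)
```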

About the paper TinyTL...

About the paper LLM-QAT: Data-Free Quantization Aware Training for Large Language Models...

About the paper SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models...
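
For reference, a minimal NumPy sketch of SmoothQuant's core trick: activation outliers are migrated into the weights with per-input-channel factors s_j = max|X_j|^alpha / max|W_j|^(1 - alpha), so that (X / s) @ (diag(s) W) equals X @ W while both factors become easier to quantize to INT8. The helper names and the toy shapes below are assumptions for illustration, not the paper's reference code.

```python
import numpy as np

def smooth_scales(x, w, alpha=0.5, eps=1e-8):
    """Per-input-channel smoothing factors s_j = max|X_j|^alpha / max|W_j|^(1-alpha).

    x: (tokens, c_in) calibration activations
    w: (c_in, c_out) weights of the following linear layer
    """
    act_max = np.abs(x).max(axis=0) + eps   # (c_in,)
    w_max = np.abs(w).max(axis=1) + eps     # (c_in,)
    return act_max ** alpha / w_max ** (1.0 - alpha)

def apply_smoothing(x, w, s):
    """Migrate quantization difficulty: X' = X / s, W' = diag(s) W, so X'W' == XW."""
    return x / s, w * s[:, None]

# Toy example with activation outliers in channel 0.
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8)); x[:, 0] *= 50.0   # outlier channel
w = rng.normal(size=(8, 4))
s = smooth_scales(x, w, alpha=0.5)
x_s, w_s = apply_smoothing(x, w, s)
assert np.allclose(x @ w, x_s @ w_s)             # the math is preserved
print(np.abs(x).max(), "->", np.abs(x_s).max())  # activation range shrinks
```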

About the paper A Survey of Quantization Methods for Efficient Neural Network Inference...
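
For reference, a minimal sketch of symmetric uniform quantization, the basic formulation the survey builds on: q = clip(round(r / S)) with a per-tensor scale S = max|r| / (2^(b-1) - 1), and dequantization r_hat = S * q. The function names and the per-tensor scale choice are illustrative assumptions.

```python
import numpy as np

def quantize_symmetric(r, num_bits=8):
    """Symmetric uniform quantization: q = clip(round(r / S)), with per-tensor scale S."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(r).max() / qmax
    q = np.clip(np.round(r / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original values: r_hat = S * q."""
    return q.astype(np.float32) * scale

r = np.random.randn(1024).astype(np.float32)
q, s = quantize_symmetric(r, num_bits=8)
r_hat = dequantize(q, s)
print("max abs error:", np.abs(r - r_hat).max())
```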