aimet quantize code analysis (2)

hee0 · September 6, 2024


Let's quantize the operation of a linear layer.

Writing the matrix multiplication as an equation:

$$Y = WX$$

and the quantization mapping as

$$r = S(q + Z)$$

substituting the quantized form for each real value gives

$$S_Y(q_Y + Z_Y) = S_W(q_W + Z_W) \cdot S_X(q_X + Z_X)$$

$$q_Y = \frac{S_W S_X}{S_Y}\left(q_W q_X + Z_W q_X + \mathbf{Z_X q_W + Z_W Z_X}\right) - Z_Y$$

  • $\mathbf{Z_X q_W + Z_W Z_X}$ can be computed in advance.
  • $\frac{S_W S_X}{S_Y}$ lies in (0, 1), so the rescale can be done with a fast bit-shift operation (see the sketch after this list).
  • If $Z_W = 0$, terms drop out; with symmetric quantization it is 0:
    $$q_Y = \frac{S_W S_X}{S_Y}\left(q_W q_X + \mathbf{Z_X q_W}\right) - Z_Y$$
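
As a rough illustration of the bit-shift claim (my own sketch, not AIMET code): a real multiplier $M \in (0, 1)$ can be approximated as $M \approx M_0 / 2^n$ with an integer $M_0$, turning the float rescale into an integer multiply plus a right shift, as in integer-only inference kernels.

# A minimal sketch, assuming the fixed-point trick from integer-only inference.
def to_fixed_point(M, n=15):
    M0 = round(M * (1 << n))   # integer multiplier
    return M0, n

M = 0.0045 * 0.0060 / 0.0054   # S_W * S_X / S_Y from the example below
M0, n = to_fixed_point(M)
acc = 12345                    # a stand-in int32 accumulator value (q_W q_X + ...)
print(M * acc)                 # float rescale
print((acc * M0) >> n)         # integer rescale; should be close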
## Linear layer without bias
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.rand(1, 2)
m = nn.Linear(2, 4, bias=False)
w = m.weight

y = m(x)  # torch.matmul(x, w.t())
print(y)

# symmetric signed 8-bit: scale = max(|r|) / 2^(b-1), offset = 0
x_scale = x.abs().max() / 128
x_offset = 0
w_scale = w.abs().max() / 128
w_offset = 0
y_scale = y.abs().max() / 128
y_offset = 0

print(x_scale, x_offset)
print(w_scale, w_offset)
print(y_scale, y_offset)

qx = x / x_scale - x_offset   # q = r/S - Z
qx = torch.round(qx)
qw = w / w_scale - w_offset
qw = torch.round(qw)

SwSx_Sy = w_scale * x_scale / y_scale      # S_W S_X / S_Y
qwqx = torch.matmul(qx, qw.t())            # q_W q_X
Zxqw = torch.matmul(torch.ones_like(x) * x_offset, qw.t())  # Z_X q_W
Zy = y_offset

qy = SwSx_Sy * (qwqx + Zxqw) - Zy
print(qy)

dqy = (qy + y_offset) * y_scale  # dequantize: r = S(q + Z)
print(dqy)


"""output
tensor(0.0060) 0
tensor(0.0045, grad_fn=<DivBackward0>) 0
tensor(0.0054, grad_fn=<DivBackward0>) 0

tensor([[-0.6886,  0.0105,  0.4238,  0.1126]], grad_fn=<MmBackward0>)
tensor([[-0.6886,  0.0105,  0.4238,  0.1126]], grad_fn=<MmBackward0>)
tensor([[-127.9099,    2.0087,   78.5994,   20.7265]], grad_fn=<SubBackward0>)
tensor([[-0.6881,  0.0108,  0.4228,  0.1115]], grad_fn=<MulBackward0>)
"""

Now let's reproduce the linear layer with aimet-torch's QuantSim.

from aimet_common.defs import QuantScheme
from aimet_torch.v2.quantsim import QuantizationSimModel

sim_linear = QuantizationSimModel(model=nn.Sequential(m),
                                  dummy_input=x,
                                  quant_scheme=QuantScheme.post_training_tf_enhanced,
                                  default_output_bw=8,
                                  default_param_bw=8)

# match the manual setup: symmetric signed quantizers everywhere
sim_linear.model[0].input_quantizers[0].symmetric = True
sim_linear.model[0].param_quantizers["weight"].symmetric = True
sim_linear.model[0].output_quantizers[0].symmetric = True
sim_linear.model[0].input_quantizers[0].signed = True
sim_linear.model[0].param_quantizers["weight"].signed = True
sim_linear.model[0].output_quantizers[0].signed = True

def foo(model, data):
    _ = model(data)

sim_linear.compute_encodings(forward_pass_callback=foo,
                             forward_pass_callback_args=x)

print(sim_linear.model[0].input_quantizers[0].get_scale())
print(sim_linear.model[0].input_quantizers[0].get_offset())
print(sim_linear.model[0].param_quantizers["weight"].get_scale())
print(sim_linear.model[0].param_quantizers["weight"].get_offset())
print(sim_linear.model[0].output_quantizers[0].get_scale())
print(sim_linear.model[0].output_quantizers[0].get_offset())
print(sim_linear.model(x))

"""output
tensor([0.0061])
tensor([0.])
tensor([0.0046])
tensor([0.])
tensor([0.0055])
tensor([0.])
DequantizedTensor([[-0.6872,  0.0109,  0.4254,  0.1145]],
                  grad_fn=<AliasBackward0>)

"""
  • The scales all differ from the manual values by about 0.0001. Why?
  • -> When computing encodings, the data supplied via {forward_pass_callback} is read and min/max statistics are collected.
    In this process it does not seem to use the raw min/max of the whole data, but something like an averaged value,
    apparently computed with a dynamic method such as a moving average.
    (AIMET's docs describe the tf_enhanced scheme as choosing the range via an SQNR-based search over a histogram of observed values, which would explain the gap; a quick way to probe this is sketched below.)
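
One way to probe this guess (my addition, not part of the original experiment): QuantScheme.post_training_tf computes encodings from the plain observed min/max, so if the 0.0001 gap disappears under it, the tf_enhanced statistics are the cause.

# Hypothesis check: post_training_tf uses raw min/max instead of
# tf_enhanced's statistical range selection.
sim_tf = QuantizationSimModel(model=nn.Sequential(m),
                              dummy_input=x,
                              quant_scheme=QuantScheme.post_training_tf,
                              default_output_bw=8,
                              default_param_bw=8)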

Encoding after setting the min/max values directly

from aimet_torch.v2.quantsim import QuantizationSimModel

sim_linear = QuantizationSimModel(model=nn.Sequential(m),
                                  dummy_input=x,
                                  quant_scheme=QuantScheme.post_training_tf_enhanced,
                                  default_output_bw=8,
                                  default_param_bw=8)

sim_linear.model[0].input_quantizers[0].symmetric = True
sim_linear.model[0].input_quantizers[0].signed = True
sim_linear.model[0].param_quantizers["weight"].symmetric = True
sim_linear.model[0].param_quantizers["weight"].signed = True
sim_linear.model[0].output_quantizers[0].symmetric = True
sim_linear.model[0].output_quantizers[0].signed = True

# set each quantizer's min/max directly from the tensors' own ranges
in_q = sim_linear.model[0].input_quantizers[0]
w_q = sim_linear.model[0].param_quantizers["weight"]
out_q = sim_linear.model[0].output_quantizers[0]

x_min, x_max = -x.abs().max(), x.abs().max()
in_q.min = torch.nn.Parameter(torch.ones_like(in_q.min) * x_min)
in_q.max = torch.nn.Parameter(torch.ones_like(in_q.max) * x_max)

w_min, w_max = -w.abs().max(), w.abs().max()
w_q.min = torch.nn.Parameter(torch.ones_like(w_q.min) * w_min)
w_q.max = torch.nn.Parameter(torch.ones_like(w_q.max) * w_max)

y_min, y_max = -y.abs().max(), y.abs().max()
out_q.min = torch.nn.Parameter(torch.ones_like(out_q.min) * y_min)
out_q.max = torch.nn.Parameter(torch.ones_like(out_q.max) * y_max)

def foo(model, data):
    pass  # min/max were set manually above, so no calibration pass is needed

sim_linear.compute_encodings(forward_pass_callback=foo,
                             forward_pass_callback_args=x)

print(sim_linear.model[0].input_quantizers[0].get_scale())
print(sim_linear.model[0].input_quantizers[0].get_offset())
print(sim_linear.model[0].param_quantizers["weight"].get_scale())
print(sim_linear.model[0].param_quantizers["weight"].get_offset())
print(sim_linear.model[0].output_quantizers[0].get_scale())
print(sim_linear.model[0].output_quantizers[0].get_offset())
print(sim_linear.model(x))

"""output
tensor([0.0060], grad_fn=<DivBackward0>)
tensor([0.])
tensor([0.0046], grad_fn=<DivBackward0>)
tensor([0.])
tensor([0.0054], grad_fn=<DivBackward0>)
tensor([0.])
DequantizedTensor([[-0.6859,  0.0108,  0.4213,  0.1134]],
                  grad_fn=<AliasBackward0>)
"""
  • x and y come out with the same values as the manual computation.
  • The weight scale still differs by 0.0001.
  • The weight is a fixed value, so there should be no need to compute it dynamically...
    One hedged guess (mine): AIMET may derive the signed symmetric scale as max / 127 (i.e., 2^(b-1) - 1) rather than the max / 128 used above; the check below probes this.
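
A quick probe of that guess (my addition; not verified against AIMET internals):

# If AIMET uses scale = max / (2^(b-1) - 1) for signed symmetric weights,
# dividing by 127 instead of 128 would reproduce the 0.0046 value.
print(w.abs().max() / 128)  # manual choice above -> ~0.0045
print(w.abs().max() / 127)  # candidate explanation -> ~0.0046?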

The case where the symmetric and signed settings differ.

## What happens if the symmetric / signed settings differ?
from aimet_torch.v2.quantsim import QuantizationSimModel

sim_linear = QuantizationSimModel(model=nn.Sequential(m),
                                  dummy_input=x,
                                  quant_scheme=QuantScheme.post_training_tf_enhanced,
                                  default_output_bw=8,
                                  default_param_bw=8)

# print(sim_linear.model[0].input_quantizers[0].symmetric) # False
# print(sim_linear.model[0].param_quantizers["weight"].symmetric) # True
# print(sim_linear.model[0].output_quantizers[0].symmetric) # False
# print(sim_linear.model[0].input_quantizers[0].signed) # False
# print(sim_linear.model[0].param_quantizers["weight"].signed) # True
# print(sim_linear.model[0].output_quantizers[0].signed) # False

def foo(model, data):
    _ = model(data)

sim_linear.compute_encodings(forward_pass_callback=foo,
                             forward_pass_callback_args=x)

print(sim_linear.model[0].input_quantizers[0].get_scale(), sim_linear.model[0].input_quantizers[0].get_offset())
print(sim_linear.model[0].param_quantizers["weight"].get_scale(), sim_linear.model[0].param_quantizers["weight"].get_offset())
print(sim_linear.model[0].output_quantizers[0].get_scale(), sim_linear.model[0].output_quantizers[0].get_offset())
print(sim_linear.model(x))

"""output
tensor([0.0030]) tensor([-0.])
tensor([0.0046]) tensor([0.])
tensor([0.0044]) tensor([-158.])
DequantizedTensor([[-0.6901,  0.0087,  0.4236,  0.1136]],
                  grad_fn=<AliasBackward0>)
"""

Implementing the above behavior by hand:

## Linear layer without bias
## What happens if the symmetric / signed settings differ?
torch.manual_seed(0)
x = torch.rand(1, 2)
m = nn.Linear(2, 4, bias=False)
w = m.weight

y = m(x)  # torch.matmul(x, w.t())
print(y)

# input/output: asymmetric unsigned 8-bit (q in [0, 255]), offset Z = min/S - q_min
x_min = min(x.min(), 0)
x_scale = (x.max() - x_min) / 255
x_offset = -0 + x_min / x_scale     # q_min = 0, so the -0 term vanishes
# weight: symmetric signed 8-bit
w_scale = w.abs().max() / 128
w_offset = 0
y_min = min(y.min(), 0)
y_scale = (y.max() - y_min) / 255
y_offset = -0 + y_min / y_scale

qx = x / x_scale - x_offset
qx = torch.round(qx)
qw = w / w_scale - w_offset
qw = torch.round(qw)

SwSx_Sy = w_scale * x_scale / y_scale
qwqx = torch.matmul(qx, qw.t())
Zxqw = torch.matmul(torch.ones_like(x) * x_offset, qw.t())
Zy = y_offset

qy = SwSx_Sy * (qwqx + Zxqw) - Zy
print(qy)

dqy = (qy + y_offset) * y_scale
print(dqy)

print(x_scale, x_offset)
print(w_scale, w_offset)
print(y_scale, y_offset)

"""output
tensor([[-0.6886,  0.0105,  0.4238,  0.1126]], grad_fn=<MmBackward0>)
tensor([[2.5246e-01, 1.6040e+02, 2.5479e+02, 1.8343e+02]],
       grad_fn=<SubBackward0>)
tensor([[-0.6875,  0.0111,  0.4228,  0.1116]], grad_fn=<MulBackward0>)
tensor(0.0030) tensor(0.)
tensor(0.0045, grad_fn=<DivBackward0>) 0
tensor(0.0044, grad_fn=<DivBackward0>) tensor(-157.8554, grad_fn=<AddBackward0>)
"""
  • We can see that when aimet computes the real-value min, it uses 0 whenever the data's min is greater than 0.
  • x_min = min(x.min(), 0)
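
A common rationale for this clamp (my note, not taken from the aimet source): the asymmetric grid must contain real 0 exactly, so that values like zero padding quantize without error. A quick round-trip check:

# With min clamped to 0, real 0 lands exactly on the integer grid.
z = torch.tensor(0.0)
qz = torch.round(z / x_scale - x_offset)
print((qz + x_offset) * x_scale)  # dequantizes back to exactly 0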

Linear layer behavior.

Let's look at the behavior of a linear layer with the bias included.

$$Y = WX + b$$
$$S_Y(q_Y + Z_Y) = S_W(q_W + Z_W) \cdot S_X(q_X + Z_X) + S_b(q_b + Z_b)$$
$$\downarrow \; Z_W = 0$$
$$S_Y(q_Y + Z_Y) = S_W S_X(q_W q_X + Z_X q_W) + S_b(q_b + Z_b)$$
$$\downarrow \; Z_b = 0,\ S_b = S_W S_X$$
$$S_Y(q_Y + Z_Y) = S_W S_X(q_W q_X + Z_X q_W + q_b)$$
$$\downarrow$$
$$q_Y = \frac{S_W S_X}{S_Y}\left(q_W q_X + \mathbf{q_b + Z_X q_W}\right) - Z_Y$$

  • $\mathbf{q_b + Z_X q_W}$ is computed in advance.
  • Set $\mathbf{S_b = S_W S_X}$.
  • Set $\mathbf{Z_W = Z_b = 0}$.
## Linear layer with bias
torch.manual_seed(0)
x = torch.rand(1, 2)
m = nn.Linear(2, 4, bias=True)
w = m.weight
b = m.bias

y = m(x)  # torch.matmul(x, w.t()) + b
print(y)

qmin = 0
qmax = 255
# input/output: asymmetric unsigned 8-bit
x_min = min(x.min(), 0)
x_scale = (x.max() - x_min) / qmax
x_offset = torch.round(-qmin + x_min / x_scale)   # Z = min/S - q_min
# weight: symmetric signed 8-bit
w_scale = w.abs().max() / 128
w_offset = 0
y_min = min(y.min(), 0)
y_scale = (y.max() - y_min) / qmax
y_offset = torch.round(-qmin + y_min / y_scale)
# bias: scale tied to S_W * S_X so q_b folds straight into the accumulator
b_scale = x_scale * w_scale
b_offset = 0

qx = x / x_scale - x_offset
qx = torch.round(qx)
qw = w / w_scale - w_offset
qw = torch.round(qw)
qb = b / b_scale - b_offset
qb = torch.round(qb)

SwSx_Sy = w_scale * x_scale / y_scale
qwqx = torch.matmul(qx, qw.t())
Zxqw = torch.matmul(torch.ones_like(x) * x_offset, qw.t())
Zy = y_offset

qy = SwSx_Sy * (qwqx + qb + Zxqw) - Zy
qy = torch.round(qy)
print(qy)

dqy = (qy + y_offset) * y_scale
print(dqy)

print(x_scale, x_offset)
print(w_scale, w_offset)
print(b_scale, b_offset)
print(y_scale, y_offset)

"""output
tensor([[-0.9023, -0.1285, -0.2518, -0.3557]], grad_fn=<AddmmBackward0>)
tensor([[  0., 255., 214., 179.]], grad_fn=<RoundBackward0>)
tensor([[-0.9013, -0.1274, -0.2519, -0.3581]], grad_fn=<MulBackward0>)
tensor(0.0030) tensor(0.)
tensor(0.0045, grad_fn=<DivBackward0>) 0
tensor(1.3698e-05, grad_fn=<MulBackward0>) 0
tensor(0.0030, grad_fn=<DivBackward0>) tensor(-297., grad_fn=<RoundBackward0>)
"""
  • By setting the bias scale and offset to the values we want, many terms can be eliminated.
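
Now the same layer through QuantSim. By default QuantSim appears to leave the bias unquantized, which is why the code below attaches a Q.affine.QuantizeDequantize quantizer to "bias" by hand.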
## Linear layer with bias
import aimet_torch.v2.quantization as Q
from aimet_torch.v2.quantsim import QuantizationSimModel

sim_linear = QuantizationSimModel(model=nn.Sequential(m),
                                  dummy_input=x,
                                  quant_scheme=QuantScheme.post_training_tf_enhanced,
                                  default_output_bw=8,
                                  default_param_bw=8)

sim_linear.model[0].param_quantizers["bias"] = Q.affine.QuantizeDequantize((1,),
                                                                           bitwidth=8,
                                                                           symmetric=True)
def foo(model, data):
    _ = model(data)

sim_linear.compute_encodings(forward_pass_callback=foo,
                             forward_pass_callback_args=x)
print(y)
print(dqy)
print(sim_linear.model(x))

print(sim_linear.model[0].input_quantizers[0].get_scale(), sim_linear.model[0].input_quantizers[0].get_offset())
print(sim_linear.model[0].param_quantizers["weight"].get_scale(), sim_linear.model[0].param_quantizers["weight"].get_offset())
print(sim_linear.model[0].param_quantizers["bias"].get_scale(), sim_linear.model[0].param_quantizers["bias"].get_offset())
print(sim_linear.model[0].output_quantizers[0].get_scale(), sim_linear.model[0].output_quantizers[0].get_offset())
"""output
tensor([[-0.9023, -0.1285, -0.2518, -0.3557]], grad_fn=<AddmmBackward0>)
tensor([[-0.9013, -0.1274, -0.2519, -0.3581]], grad_fn=<MulBackward0>)
DequantizedTensor([[-0.8996, -0.1270, -0.2505, -0.3563]],
                  grad_fn=<AliasBackward0>)
tensor([0.0030]) tensor([-0.])
tensor([0.0046]) tensor([0.])
tensor([0.0053], grad_fn=<DivBackward0>) tensor([0.])
tensor([0.0035]) tensor([-255.])
"""
  • aimet-torch seems to compute a scale and offset for the bias as well: here the bias scale comes out as 0.0053, derived from the bias's own range, rather than the $S_b = S_W S_X$ (~1.37e-5) used in the manual implementation.
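
A quick comparison of the two conventions (my addition; the /127 divisor is the same unverified guess as earlier):

# The QuantizeDequantize attached to "bias" derives its encoding from the
# bias values themselves, instead of tying S_b to S_W * S_X.
print(x_scale * w_scale)    # hardware-style S_b = S_W S_X -> ~1.37e-5
print(b.abs().max() / 127)  # scale from the bias's own range -> ~0.0053?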