Pytorch to CoreML

Eunseo Jeong · September 28, 2022

on Device AI


Pytorch

  • A machine learning framework usable from Python

CoreML

Why?

  • When building an iOS app with ML features in Swift, you can create models and run inference through Core ML, a framework that can be imported directly in Swift.
  • A model trained with another framework such as TensorFlow or PyTorch must be converted to the Core ML format before it can run on iOS (the same idea as converting a TensorFlow model to PyTorch).

How?

In Python Code (PyTorch)

  1. Load the model
import torch
import torchvision

# Load a pre-trained version of MobileNetV2
torch_model = torchvision.models.mobilenet_v2(pretrained=True)
# Set the model in evaluation mode.
torch_model.eval()

# Trace the model with random data.
example_input = torch.rand(1, 3, 224, 224) 
traced_model = torch.jit.trace(torch_model, example_input)

# Running the traced model once sets up the sample input and sample output
out = traced_model(example_input)
  2. Convert the model using convert()
import coremltools as ct

# Using image_input in the inputs parameter:
# Convert to Core ML using the Unified Conversion API.
model = ct.convert(
    traced_model,
    inputs=[ct.TensorType(shape=example_input.shape)]
)
  3. Save the mlmodel
# Save the converted model.
model.save("mobilenet.mlmodel")
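One thing to keep in mind with a TensorType input: the converted model expects exactly the same preprocessing the PyTorch model was trained with. As a hedged sketch (not part of the original post), this is the torchvision-style MobileNetV2 preprocessing in plain numpy; the mean/std values are the standard ImageNet ones:

```python
import numpy as np

# ImageNet normalization constants used by torchvision's MobileNetV2
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(rgb_uint8):
    """HWC uint8 image -> NCHW float32 tensor, matching the traced model's input."""
    x = rgb_uint8.astype(np.float32) / 255.0   # scale to [0, 1]
    x = (x - MEAN) / STD                       # per-channel normalization
    x = x.transpose(2, 0, 1)[None, ...]        # HWC -> NCHW, add batch dimension
    return x

img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
tensor = preprocess(img)
print(tensor.shape)  # (1, 3, 224, 224)
```

If you instead convert with ct.ImageType, Core ML can apply this scaling and bias for you on-device.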

In Swift Code (CoreML)

  • The converted mlmodel can be loaded into an Xcode project as follows.

  1. Load the mlmodel
guard let model = try? VNCoreMLModel(for: FaceParsing().model) else {
    fatalError("Loading CoreML Model Failed.")
}
  2. Create a request that processes the inference results
let request = VNCoreMLRequest(model: model) {
    request, error in
    guard let results = request.results as? [VNCoreMLFeatureValueObservation],
            let segmentationmap = results.first?.featureValue.multiArrayValue,
            let row = segmentationmap.shape[0] as? Int,
            let col = segmentationmap.shape[1] as? Int else {
        fatalError("Model failed to process images.")
    }
    
    self.model_results = results
    self.model_segmentationmap = segmentationmap
}
  3. Run inference with a handler
let handler: VNImageRequestHandler = VNImageRequestHandler(ciImage: inputImg as! CIImage)

do {
    try handler.perform([request])
} catch {
    print("error")
}
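To illustrate what the segmentation map holds, here is a small Python/numpy sketch (illustrative only; the actual processing happens in Swift): each (row, col) entry is a class index, which can be mapped to a color to draw the mask. The palette and the 4-class layout are hypothetical, since the post does not list the FaceParsing model's classes:

```python
import numpy as np

# Hypothetical palette: background + 3 face-parsing classes
PALETTE = np.array([
    [0, 0, 0],      # class 0: background
    [255, 0, 0],    # class 1
    [0, 255, 0],    # class 2
    [0, 0, 255],    # class 3
], dtype=np.uint8)

def colorize(segmap):
    """Map a (rows, cols) array of class indices to an RGB mask."""
    return PALETTE[segmap]

segmap = np.array([[0, 1], [2, 3]])  # toy 2x2 segmentation map
mask = colorize(segmap)
print(mask.shape)  # (2, 2, 3)
```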

Full code

guard let model = try? VNCoreMLModel(for: FaceParsing().model) else {
    fatalError("Loading CoreML Model Failed.")
}
        
let request = VNCoreMLRequest(model: model) {
    request, error in
    guard let results = request.results as? [VNCoreMLFeatureValueObservation],
            let segmentationmap = results.first?.featureValue.multiArrayValue,
            let row = segmentationmap.shape[0] as? Int,
            let col = segmentationmap.shape[1] as? Int else {
        fatalError("Model failed to process images.")
    }
    
    self.model_results = results
    self.model_segmentationmap = segmentationmap
}

let handler: VNImageRequestHandler = VNImageRequestHandler(ciImage: inputImg as! CIImage)

do {
    try handler.perform([request])
} catch {
    print("error")
}

Ref)
https://coremltools.readme.io/docs/what-are-coreml-tools
