TensorFlow to CoreML

Eunseo Jeong · September 28, 2022

TensorFlow

  • A machine-learning framework available in Python

CoreML

Why?

  • When building an iOS app that uses ML with Swift, you can create, train, and run inference on models through CoreML, a framework that can be imported in Swift.
  • A model trained with the TensorFlow framework must be converted to the CoreML framework before it can be used on iOS (the same idea as converting a TensorFlow model to PyTorch).

How?

In Python Code (TensorFlow)

  1. Load the model
import coremltools as ct

# Load TensorFlow model
import tensorflow as tf  # TF 2.2.0

tf_model = tf.keras.applications.Xception(weights="imagenet", 
                                          input_shape=(299, 299, 3))
  2. Convert the model using convert()
# Convert using the same API
model_from_tf = ct.convert(tf_model)
  3. Save the converted model as an .mlmodel file
model_from_tf.save("imagenet.mlmodel")
  4. A final version that collects the various ways to load a model for conversion (any of these variants works)
import tensorflow as tf
import coremltools as ct

tf_keras_model = tf.keras.Sequential(
    [
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax),
    ]
)

# Pass in `tf.keras.Model` to the Unified Conversion API
mlmodel = ct.convert(tf_keras_model)

# or save the keras model in SavedModel directory format and then convert
tf_keras_model.save('tf_keras_model')
mlmodel = ct.convert('tf_keras_model')

# or load the model from a SavedModel and then convert
tf_keras_model = tf.keras.models.load_model('tf_keras_model')
mlmodel = ct.convert(tf_keras_model)

# or save the keras model in HDF5 format and then convert
tf_keras_model.save('tf_keras_model.h5')
mlmodel = ct.convert('tf_keras_model.h5')

# save converted model
mlmodel.save("trainedmodel.mlmodel")
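One detail worth handling at conversion time is input preprocessing. As far as I know, coremltools lets you declare an image input with a scale and bias (e.g. ct.ImageType(scale=..., bias=...)), and CoreML then applies the affine map scale * pixel + bias before inference. Xception from tf.keras.applications expects inputs in [-1, 1] (x / 127.5 - 1), so the values below should reproduce that; this is a minimal pure-Python sketch of the mapping, with the concrete numbers being my assumption for this model:

```python
# Sketch: CoreML image preprocessing is an affine map, out = scale * pixel + bias.
# These values (assumed here) match Xception's expected [-1, 1] input range;
# passing them via ct.ImageType(scale=SCALE, bias=BIAS) would let the .mlmodel
# accept raw 0-255 pixel values directly.

SCALE = 1.0 / 127.5
BIAS = -1.0

def preprocess_pixel(p: float) -> float:
    """Map a 0-255 pixel value into the [-1, 1] range the model expects."""
    return SCALE * p + BIAS

# 0 maps to -1 and 255 maps to 1 (up to float rounding)
print(preprocess_pixel(0))
print(preprocess_pixel(255))
```

Declaring the preprocessing inside the model this way means the Swift side can hand raw pixel buffers to the request handler without normalizing them first.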

In Swift Code (CoreML)

  • The .mlmodel produced above can be loaded into the project as follows.

  1. Load the mlmodel
import CoreML
import Vision

// FaceParsing is the class Xcode auto-generates from the .mlmodel file
guard let model = try? VNCoreMLModel(for: FaceParsing().model) else {
    fatalError("Loading CoreML Model Failed.")
}
  2. Create a request that processes the inference results
let request = VNCoreMLRequest(model: model) {
    request, error in
    guard let results = request.results as? [VNCoreMLFeatureValueObservation],
          let segmentationmap = results.first?.featureValue.multiArrayValue,
          let row = segmentationmap.shape[0] as? Int,
          let col = segmentationmap.shape[1] as? Int else {
        fatalError("Model failed to process images.")
    }

    self.model_results = results
    self.model_segmentationmap = segmentationmap
}
  3. Run inference by performing the request with an image request handler (the request must exist before it is performed)
let handler = VNImageRequestHandler(ciImage: inputImg as! CIImage)

do {
    try handler.perform([request])
} catch {
    print(error)
}

Full code

guard let model = try? VNCoreMLModel(for: FaceParsing().model) else {
    fatalError("Loading CoreML Model Failed.")
}

let request = VNCoreMLRequest(model: model) {
    request, error in
    guard let results = request.results as? [VNCoreMLFeatureValueObservation],
            let segmentationmap = results.first?.featureValue.multiArrayValue,
            let row = segmentationmap.shape[0] as? Int,
            let col = segmentationmap.shape[1] as? Int else {
        fatalError("Model failed to process images.")
    }
    
    self.model_results = results
    self.model_segmentationmap = segmentationmap
}

let handler = VNImageRequestHandler(ciImage: inputImg as! CIImage)

do {
    try handler.perform([request])
} catch {
    print(error)
}
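The completion handler above stores a rows × cols segmentation map whose entries are (by assumption here) integer class indices from the FaceParsing model. Once those values are copied out of the MLMultiArray, post-processing such as per-class pixel counts is plain iteration; a toy Python sketch with a made-up 3×4 map:

```python
# Sketch (assumption): each entry of the segmentation map is a class index.
from collections import Counter

# Toy 3x4 segmentation map standing in for the model's rows x cols output
segmentation_map = [
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 0],
]

def class_pixel_counts(seg_map):
    """Count how many pixels each class index occupies."""
    return Counter(label for row in seg_map for label in row)

counts = class_pixel_counts(segmentation_map)
print(counts)  # Counter({0: 5, 2: 4, 1: 3})
```

The same loop translates directly to Swift over the MLMultiArray's row and col dimensions, e.g. to build a colored overlay mask per facial region.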

Ref)
https://pilgwon.github.io/blog/2017/09/18/Smart-Gesture-Recognition-CoreML-TensorFlow.html
https://medium.com/@JMangia/swift-loves-tensorflow-and-coreml-2a11da25d44
