This post is a summary of a MediaPipe Pose YouTube video.
It is an ipynb file. Since the video is narrated in English, the comments are written in English as well. The video has no subtitles, so some comments may be slightly off; please bear with me. Still, I will explain each line as carefully as I can!
Steps
Just bring over the code from Install and Import Dependencies as-is.
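For reference, that dependency cell usually looks something like the sketch below. This is my own reconstruction (not copied from the video), assuming the standard aliases mp_drawing and mp_pose that the code below relies on.

# Rough sketch of the Install and Import Dependencies cell (my reconstruction)
# !pip install mediapipe opencv-python
import cv2
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils  # drawing helpers for rendering landmarks
mp_pose = mp.solutions.pose              # the pose estimation solution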
cap = cv2.VideoCapture(0)
## Setup mediapipe instance
with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose:  # variable pose
    # the Pose model instance is then accessible via the variable pose
    while cap.isOpened():
        ret, frame = cap.read()

        # Recolor image to RGB
        image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # reorder the color channels (OpenCV uses BGR, MediaPipe expects RGB)
        image.flags.writeable = False  # saves a bunch of memory once we pass this to our pose estimation model

        #! Make detection
        results = pose.process(image)  # get our detections back and store them
        # pose means the pose model

        # Recolor back to BGR
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
        # in a second we re-render the frame with OpenCV, and again OpenCV wants its image in BGR format

        #! Render detections
        # go ahead and draw our detections onto the image
        mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_pose.POSE_CONNECTIONS,
                                  mp_drawing.DrawingSpec(color=(245,117,66), thickness=2, circle_radius=2),  # joint
                                  mp_drawing.DrawingSpec(color=(245,66,230), thickness=2, circle_radius=2)   # bone
                                  )

        cv2.imshow('Mediapipe Feed', image)

        if cv2.waitKey(10) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()
mp_pose.Pose
→ this is actually accessing our pose estimation model
min_detection_confidence=0.5 is what we want our detection confidence to be; min_tracking_confidence=0.5 specifies our tracking confidence, which maintains our state between frames.
mp_drawing.draw_landmarks
→ draws the detected landmarks and their connections onto the image
mp_drawing.DrawingSpec
→ specifies how they are drawn (color, thickness, circle radius) for the joints and the bones
results.pose_landmarks
→ holds the coordinates for each and every landmark (see the short sketch after this list)
→ each individual point represents a part of the pose estimation model's output
mp_pose.POSE_CONNECTIONS
→ which landmarks are connected to which
→ e.g. the nose is connected to the left eye inner, and so on
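As a quick illustration (my own sketch, not code from the video), this is how you could read one landmark's coordinates from results.pose_landmarks and peek at the POSE_CONNECTIONS pairs, assuming results was produced inside the loop above:

# Sketch: inspecting the results object
if results.pose_landmarks:
    # each landmark has normalized x, y, z coordinates plus a visibility score
    nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE.value]
    print(nose.x, nose.y, nose.z, nose.visibility)

# POSE_CONNECTIONS is a set of (start, end) landmark index pairs, e.g. nose -> left eye inner
print(list(mp_pose.POSE_CONNECTIONS)[:5])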