Object Detection (kitti tiny)

bbkyoo · Sep 30, 2021 · Detection series (1/4)
!pip install mmcv-full
Collecting mmcv-full
  Downloading mmcv-full-1.3.14.tar.gz (324 kB)
     |████████████████████████████████| 324 kB 8.3 MB/s 
Collecting addict
  Downloading addict-2.4.0-py3-none-any.whl (3.8 kB)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mmcv-full) (1.19.5)
Requirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from mmcv-full) (21.0)
Requirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from mmcv-full) (7.1.2)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from mmcv-full) (3.13)
Collecting yapf
  Downloading yapf-0.31.0-py2.py3-none-any.whl (185 kB)
     |████████████████████████████████| 185 kB 67.1 MB/s 
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->mmcv-full) (2.4.7)
Building wheels for collected packages: mmcv-full
  Building wheel for mmcv-full (setup.py) ... done
  Created wheel for mmcv-full: filename=mmcv_full-1.3.14-cp37-cp37m-linux_x86_64.whl size=31614925 sha256=204beb9bca2c4a58886e1738b1bd8e24ebcb4ee4042608d6a2d6a1789ecdf2e2
  Stored in directory: /root/.cache/pip/wheels/5e/54/62/69c99dc3c9937bca64126f81cbe315ae6c8e6e98c43fa7392d
Successfully built mmcv-full
Installing collected packages: yapf, addict, mmcv-full
Successfully installed addict-2.4.0 mmcv-full-1.3.14 yapf-0.31.0
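Building mmcv-full from source like this takes several minutes on Colab. For future runs, a prebuilt wheel from OpenMMLab's wheel index avoids the source build (the cu111/torch1.9.0 pair below is an assumption and must match the actual runtime). Either way, a quick sanity check confirms the install:

!pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html

import mmcv, torch
print(mmcv.__version__, torch.__version__, torch.cuda.is_available())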
!git clone https://github.com/open-mmlab/mmdetection.git
Cloning into 'mmdetection'...
remote: Enumerating objects: 21083, done.
remote: Total 21083 (delta 0), reused 0 (delta 0), pack-reused 21083
Receiving objects: 100% (21083/21083), 24.81 MiB | 30.57 MiB/s, done.
Resolving deltas: 100% (14743/14743), done.
%cd mmdetection
/content/mmdetection
!python setup.py install
!mkdir checkpoints
!wget -O /content/mmdetection/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth
--2021-09-29 06:33:18--  https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth
Resolving download.openmmlab.com (download.openmmlab.com)... 47.88.36.78
Connecting to download.openmmlab.com (download.openmmlab.com)|47.88.36.78|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 167287506 (160M) [application/octet-stream]
Saving to: ‘/content/mmdetection/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth’

/content/mmdetectio 100%[===================>] 159.54M  13.0MB/s    in 14s     

2021-09-29 06:33:33 (11.5 MB/s) - ‘/content/mmdetection/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth’ saved [167287506/167287506]
from mmdet.apis import init_detector, inference_detector, show_result_pyplot
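(Optional) Before moving to the custom data, a quick smoke test with the COCO-pretrained model on the demo image that ships with the mmdetection repo confirms that the config, checkpoint and GPU all work together. A minimal sketch using the paths set up above:

coco_cfg = '/content/mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
coco_ckpt = '/content/mmdetection/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
coco_model = init_detector(coco_cfg, coco_ckpt, device='cuda:0')
demo_img = '/content/mmdetection/demo/demo.jpg'
result = inference_detector(coco_model, demo_img)
show_result_pyplot(coco_model, demo_img, result, score_thr=0.3)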
# download, decompress the data
!wget https://download.openmmlab.com/mmdetection/data/kitti_tiny.zip
!unzip kitti_tiny.zip > /dev/null
--2021-09-29 06:03:55--  https://download.openmmlab.com/mmdetection/data/kitti_tiny.zip
Resolving download.openmmlab.com (download.openmmlab.com)... 47.88.36.78
Connecting to download.openmmlab.com (download.openmmlab.com)|47.88.36.78|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6918271 (6.6M) [application/zip]
Saving to: ‘kitti_tiny.zip’

kitti_tiny.zip      100%[===================>]   6.60M  4.15MB/s    in 1.6s    

2021-09-29 06:03:58 (4.15 MB/s) - ‘kitti_tiny.zip’ saved [6918271/6918271]
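kitti_tiny unzips into /content/kitti_tiny with training/image_2 (jpeg images), training/label_2 (KITTI-format text labels) and the split files train.txt / val.txt. A quick peek at the split and at one label file helps before writing the dataset class; in KITTI labels, columns 4-7 hold the 2D box (x1, y1, x2, y2), which is exactly what the parser below reads:

import mmcv
ids = mmcv.list_from_file('/content/kitti_tiny/train.txt')
print(len(ids), ids[:3])
# first line of the first training label file: class, truncation, occlusion, alpha, x1, y1, x2, y2, ...
print(open(f'/content/kitti_tiny/training/label_2/{ids[0]}.txt').readline())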
# Full KITTI class list for reference: ('Car', 'Van', 'Truck', 'Pedestrian', 'Person_sitting', 'Cyclist', 'Tram', 'Misc', 'DontCare'); only Car, Pedestrian and Cyclist are kept below.
import copy
import os.path as osp

import mmcv
import numpy as np

from mmdet.datasets.builder import DATASETS
from mmdet.datasets.custom import CustomDataset

@DATASETS.register_module(force=True)
class KittiTinyDataset(CustomDataset):

    CLASSES = ('Car', 'Pedestrian', 'Cyclist')
    
    ### self.ann_file : /content/kitti_tiny/train.txt
    ### self.img_prefix : /content/kitti_tiny/training/image_2
    ### ann_file : /content/kitti_tiny/train.txt

    def load_annotations(self, ann_file):
        print("self.ann_file : ", self.ann_file)
        print("self.img_prefix: ", self.img_prefix)

        cat2label = {k: i for i, k in enumerate(self.CLASSES)}
        # load image list from file
        image_list = mmcv.list_from_file(self.ann_file)
    
        data_infos = []
        # convert annotations to middle format
        for image_id in image_list:
            filename = f'{self.img_prefix}/{image_id}.jpeg'
            image = mmcv.imread(filename)
            height, width = image.shape[:2]
    
            data_info = dict(filename=f'{image_id}.jpeg', width=width, height=height)
    
            # load annotations
            label_prefix = self.img_prefix.replace('image_2', 'label_2')
            lines = mmcv.list_from_file(osp.join(label_prefix, f'{image_id}.txt'))
    
            content = [line.strip().split(' ') for line in lines]
            bbox_names = [x[0] for x in content]
            bboxes = [[float(info) for info in x[4:8]] for x in content]
    
            gt_bboxes = []
            gt_labels = []
            gt_bboxes_ignore = []
            gt_labels_ignore = []
    
            # classes outside CLASSES (e.g. Van, Truck, DontCare) go to the ignore lists
            for bbox_name, bbox in zip(bbox_names, bboxes):
                if bbox_name in cat2label:
                    gt_labels.append(cat2label[bbox_name])
                    gt_bboxes.append(bbox)
                else:
                    gt_labels_ignore.append(-1)
                    gt_bboxes_ignore.append(bbox)

            data_anno = dict(
                bboxes=np.array(gt_bboxes, dtype=np.float32).reshape(-1, 4),
                labels=np.array(gt_labels, dtype=np.long),
                bboxes_ignore=np.array(gt_bboxes_ignore,
                                       dtype=np.float32).reshape(-1, 4),
                labels_ignore=np.array(gt_labels_ignore, dtype=np.long))

            data_info.update(ann=data_anno)
            data_infos.append(data_info)

        return data_infos
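For reference, each record returned by load_annotations follows MMDetection's middle format, roughly as sketched below (the concrete numbers are illustrative only):

# data_info = {
#     'filename': '000000.jpeg',                      # relative to img_prefix
#     'width': 1242, 'height': 375,
#     'ann': {
#         'bboxes':        (n, 4) float32 array of [x1, y1, x2, y2],
#         'labels':        (n,)   int array of indices into CLASSES,
#         'bboxes_ignore': (k, 4) float32 array (boxes of skipped classes),
#         'labels_ignore': (k,)   int array
#     }
# }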
config_file = '/content/mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
checkpoint_file = '/content/mmdetection/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
!ls -la /content/mmdetection/checkpoints/
total 163376
drwxr-xr-x  2 root root      4096 Sep 29 06:33 .
drwxr-xr-x 20 root root      4096 Sep 29 06:54 ..
-rw-r--r--  1 root root 167287506 Aug 28  2020 faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth
from mmcv import Config
cfg = Config.fromfile(config_file)
print(cfg.pretty_text)
from mmdet.apis import set_random_seed

# Modify dataset type and path
cfg.dataset_type = 'KittiTinyDataset'
cfg.data_root = '/content/kitti_tiny/'

cfg.data.test.type = 'KittiTinyDataset'
cfg.data.test.data_root = '/content/kitti_tiny/'
cfg.data.test.ann_file = 'train.txt'
cfg.data.test.img_prefix = 'training/image_2'

cfg.data.train.type = 'KittiTinyDataset'
cfg.data.train.data_root = '/content/kitti_tiny/'
cfg.data.train.ann_file = 'train.txt'
cfg.data.train.img_prefix = 'training/image_2'

cfg.data.val.type = 'KittiTinyDataset'
cfg.data.val.data_root = '/content/kitti_tiny/'
cfg.data.val.ann_file = 'val.txt'
cfg.data.val.img_prefix = 'training/image_2'

# modify num classes of the model in box head
cfg.model.roi_head.bbox_head.num_classes = 3
# We can reuse the COCO-pretrained Faster R-CNN weights; the classification and
# regression heads will be re-initialized for the 3 KITTI classes.
cfg.load_from = 'checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'

# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'

# The original learning rate (LR) is set for 8-GPU training.
# We divide it by 8 since we only use one GPU.
cfg.optimizer.lr = 0.02 / 8
cfg.lr_config.warmup = None
cfg.log_config.interval = 10

cfg.lr_config.policy = 'step'

# Change the evaluation metric since we use customized dataset.
cfg.evaluation.metric = 'mAP'
# We can set the evaluation interval to reduce the evaluation times
cfg.evaluation.interval = 12
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 12

# Set seed so the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)


# We can initialize the logger for training and have a look
# at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
Config:
model = dict(
    type='FasterRCNN',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch',
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[8],
            ratios=[0.5, 1.0, 2.0],
            strides=[4, 8, 16, 32, 64]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[0.0, 0.0, 0.0, 0.0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
    roi_head=dict(
        type='StandardRoIHead',
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        bbox_head=dict(
            type='Shared2FCBBoxHead',
            in_channels=256,
            fc_out_channels=1024,
            roi_feat_size=7,
            num_classes=3,
            bbox_coder=dict(
                type='DeltaXYWHBBoxCoder',
                target_means=[0.0, 0.0, 0.0, 0.0],
                target_stds=[0.1, 0.1, 0.2, 0.2]),
            reg_class_agnostic=False,
            loss_cls=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
            loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
        train_cfg=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.5,
                neg_iou_thr=0.5,
                min_pos_iou=0.5,
                match_low_quality=False,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=512,
                pos_fraction=0.25,
                neg_pos_ub=-1,
                add_gt_as_proposals=True),
            pos_weight=-1,
            debug=False),
        test_cfg=dict(
            score_thr=0.05,
            nms=dict(type='nms', iou_threshold=0.5),
            max_per_img=100),
        pretrained=None),
    train_cfg=dict(
        rpn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.7,
                neg_iou_thr=0.3,
                min_pos_iou=0.3,
                match_low_quality=True,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=256,
                pos_fraction=0.5,
                neg_pos_ub=-1,
                add_gt_as_proposals=False),
            allowed_border=-1,
            pos_weight=-1,
            debug=False),
        rpn_proposal=dict(
            nms_pre=2000,
            max_per_img=1000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.5,
                neg_iou_thr=0.5,
                min_pos_iou=0.5,
                match_low_quality=False,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=512,
                pos_fraction=0.25,
                neg_pos_ub=-1,
                add_gt_as_proposals=True),
            pos_weight=-1,
            debug=False)),
    test_cfg=dict(
        rpn=dict(
            nms_pre=1000,
            max_per_img=1000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=dict(
            score_thr=0.05,
            nms=dict(type='nms', iou_threshold=0.5),
            max_per_img=100)))
dataset_type = 'KittiTinyDataset'
data_root = '/content/kitti_tiny/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type='KittiTinyDataset',
        ann_file='train.txt',
        img_prefix='training/image_2',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations', with_bbox=True),
            dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
            dict(type='RandomFlip', flip_ratio=0.5),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
        ],
        data_root='/content/kitti_tiny/'),
    val=dict(
        type='KittiTinyDataset',
        ann_file='val.txt',
        img_prefix='training/image_2',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1333, 800),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ],
        data_root='/content/kitti_tiny/'),
    test=dict(
        type='KittiTinyDataset',
        ann_file='train.txt',
        img_prefix='training/image_2',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1333, 800),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ],
        data_root='/content/kitti_tiny/'))
evaluation = dict(interval=12, metric='mAP')
optimizer = dict(type='SGD', lr=0.0025, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None, type='OptimizerHook')
lr_config = dict(
    warmup=None,
    warmup_iters=500,
    warmup_ratio=0.001,
    step=[8, 11],
    type='StepLrUpdaterHook',
    policy='step')
runner = dict(type='EpochBasedRunner', max_epochs=12)
checkpoint_config = dict(interval=12, type='CheckpointHook')
log_config = dict(interval=10, hooks=[dict(type='TextLoggerHook')])
custom_hooks = [dict(type='NumClassCheckHook')]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = 'checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
resume_from = None
workflow = [('train', 1)]
work_dir = './tutorial_exps'
seed = 0
gpu_ids = range(0, 1)
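Before building the dataset, a quick spot-check that the overrides actually landed (the values should match the printed config above):

print(cfg.optimizer.lr, cfg.data.train.type, cfg.model.roi_head.bbox_head.num_classes)
# 0.0025 KittiTinyDataset 3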
from mmdet.datasets import build_dataset
from mmdet.models import build_detector
from mmdet.apis import train_detector
%cd ..
/content
# Build the training dataset (this calls KittiTinyDataset.load_annotations)

datasets = [build_dataset(cfg.data.train)]
self.ann_file :  /content/kitti_tiny/train.txt
self.img_prefix:  /content/kitti_tiny/training/image_2


/usr/local/lib/python3.7/dist-packages/mmdet-2.17.0-py3.7.egg/mmdet/datasets/custom.py:157: UserWarning: CustomDataset does not support filtering empty gt images.
  'CustomDataset does not support filtering empty gt images.')
datasets
[
 KittiTinyDataset Train dataset with number of images 50, and instance counts: 
 +----------+-------+----------------+-------+-------------+-------+----------+-------+----------+-------+
 | category | count | category       | count | category    | count | category | count | category | count |
 +----------+-------+----------------+-------+-------------+-------+----------+-------+----------+-------+
 |          |       |                |       |             |       |          |       |          |       |
 | 0 [Car]  | 147   | 1 [Pedestrian] | 23    | 2 [Cyclist] | 7     |          |       |          |       |
 +----------+-------+----------------+-------+-------------+-------+----------+-------+----------+-------+]
datasets[0].CLASSES
('Car', 'Pedestrian', 'Cyclist')
model = build_detector(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
model.CLASSES = datasets[0].CLASSES
/usr/local/lib/python3.7/dist-packages/mmdet-2.17.0-py3.7.egg/mmdet/core/anchor/builder.py:17: UserWarning: ``build_anchor_generator`` would be deprecated soon, please use ``build_prior_generator`` 
  '``build_anchor_generator`` would be deprecated soon, please use '
# move back into the repo root so the relative load_from path ('checkpoints/...') resolves
%cd mmdetection/
/content/mmdetection
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_detector(model, datasets, cfg, distributed=False, validate=True)
2021-09-29 07:10:23,867 - mmdet - INFO - load checkpoint from checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth
2021-09-29 07:10:23,869 - mmdet - INFO - Use load_from_local loader


self.ann_file :  /content/kitti_tiny/val.txt
self.img_prefix:  /content/kitti_tiny/training/image_2


2021-09-29 07:10:23,999 - mmdet - WARNING - The model and loaded state dict do not match exactly

size mismatch for roi_head.bbox_head.fc_cls.weight: copying a param with shape torch.Size([81, 1024]) from checkpoint, the shape in current model is torch.Size([4, 1024]).
size mismatch for roi_head.bbox_head.fc_cls.bias: copying a param with shape torch.Size([81]) from checkpoint, the shape in current model is torch.Size([4]).
size mismatch for roi_head.bbox_head.fc_reg.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([12, 1024]).
size mismatch for roi_head.bbox_head.fc_reg.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([12]).
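These mismatches are expected: the COCO head predicts 80 classes plus background (81 classification outputs and 80 × 4 = 320 box-regression outputs), while the new head predicts 3 classes plus background (4 and 3 × 4 = 12). Those two layers are re-initialized and learned during fine-tuning; every other weight loads from the checkpoint.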
2021-09-29 07:10:24,003 - mmdet - INFO - Start running, host: root@43665bed83ab, work_dir: /content/mmdetection/tutorial_exps
2021-09-29 07:10:24,004 - mmdet - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) StepLrUpdaterHook                  
(NORMAL      ) CheckpointHook                     
(LOW         ) EvalHook                           
(VERY_LOW    ) TextLoggerHook                     
 -------------------- 
before_train_epoch:
(VERY_HIGH   ) StepLrUpdaterHook                  
(NORMAL      ) NumClassCheckHook                  
(LOW         ) IterTimerHook                      
(LOW         ) EvalHook                           
(VERY_LOW    ) TextLoggerHook                     
 -------------------- 
before_train_iter:
(VERY_HIGH   ) StepLrUpdaterHook                  
(LOW         ) IterTimerHook                      
(LOW         ) EvalHook                           
 -------------------- 
after_train_iter:
(ABOVE_NORMAL) OptimizerHook                      
(NORMAL      ) CheckpointHook                     
(LOW         ) IterTimerHook                      
(LOW         ) EvalHook                           
(VERY_LOW    ) TextLoggerHook                     
 -------------------- 
after_train_epoch:
(NORMAL      ) CheckpointHook                     
(LOW         ) EvalHook                           
(VERY_LOW    ) TextLoggerHook                     
 -------------------- 
before_val_epoch:
(NORMAL      ) NumClassCheckHook                  
(LOW         ) IterTimerHook                      
(VERY_LOW    ) TextLoggerHook                     
 -------------------- 
before_val_iter:
(LOW         ) IterTimerHook                      
 -------------------- 
after_val_iter:
(LOW         ) IterTimerHook                      
 -------------------- 
after_val_epoch:
(VERY_LOW    ) TextLoggerHook                     
 -------------------- 
2021-09-29 07:10:24,006 - mmdet - INFO - workflow: [('train', 1)], max: 12 epochs
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
/usr/local/lib/python3.7/dist-packages/mmdet-2.17.0-py3.7.egg/mmdet/core/anchor/anchor_generator.py:324: UserWarning: ``grid_anchors`` would be deprecated soon. Please use ``grid_priors`` 
  warnings.warn('``grid_anchors`` would be deprecated soon. '
/usr/local/lib/python3.7/dist-packages/mmdet-2.17.0-py3.7.egg/mmdet/core/anchor/anchor_generator.py:361: UserWarning: ``single_level_grid_anchors`` would be deprecated soon. Please use ``single_level_grid_priors`` 
  '``single_level_grid_anchors`` would be deprecated soon. '
2021-09-29 07:10:28,653 - mmdet - INFO - Epoch [1][10/25]	lr: 2.500e-03, eta: 0:02:13, time: 0.459, data_time: 0.221, memory: 2389, loss_rpn_cls: 0.0317, loss_rpn_bbox: 0.0180, loss_cls: 0.5668, acc: 84.0430, loss_bbox: 0.4012, loss: 1.0176
2021-09-29 07:10:30,937 - mmdet - INFO - Epoch [1][20/25]	lr: 2.500e-03, eta: 0:01:36, time: 0.228, data_time: 0.009, memory: 2389, loss_rpn_cls: 0.0234, loss_rpn_bbox: 0.0126, loss_cls: 0.1950, acc: 93.9160, loss_bbox: 0.3166, loss: 0.5476
2021-09-29 07:10:36,515 - mmdet - INFO - Epoch [2][10/25]	lr: 2.500e-03, eta: 0:01:25, time: 0.440, data_time: 0.218, memory: 2389, loss_rpn_cls: 0.0159, loss_rpn_bbox: 0.0154, loss_cls: 0.1620, acc: 94.7852, loss_bbox: 0.2761, loss: 0.4694
2021-09-29 07:10:38,822 - mmdet - INFO - Epoch [2][20/25]	lr: 2.500e-03, eta: 0:01:16, time: 0.231, data_time: 0.010, memory: 2389, loss_rpn_cls: 0.0124, loss_rpn_bbox: 0.0123, loss_cls: 0.1451, acc: 94.3750, loss_bbox: 0.2205, loss: 0.3903
2021-09-29 07:10:44,392 - mmdet - INFO - Epoch [3][10/25]	lr: 2.500e-03, eta: 0:01:11, time: 0.438, data_time: 0.217, memory: 2389, loss_rpn_cls: 0.0069, loss_rpn_bbox: 0.0113, loss_cls: 0.1018, acc: 96.3477, loss_bbox: 0.1667, loss: 0.2867
2021-09-29 07:10:46,708 - mmdet - INFO - Epoch [3][20/25]	lr: 2.500e-03, eta: 0:01:06, time: 0.232, data_time: 0.009, memory: 2389, loss_rpn_cls: 0.0081, loss_rpn_bbox: 0.0120, loss_cls: 0.1560, acc: 93.9941, loss_bbox: 0.2608, loss: 0.4368
2021-09-29 07:10:52,314 - mmdet - INFO - Epoch [4][10/25]	lr: 2.500e-03, eta: 0:01:02, time: 0.440, data_time: 0.217, memory: 2389, loss_rpn_cls: 0.0070, loss_rpn_bbox: 0.0148, loss_cls: 0.1313, acc: 94.7949, loss_bbox: 0.2276, loss: 0.3807
2021-09-29 07:10:54,638 - mmdet - INFO - Epoch [4][20/25]	lr: 2.500e-03, eta: 0:00:58, time: 0.232, data_time: 0.010, memory: 2389, loss_rpn_cls: 0.0043, loss_rpn_bbox: 0.0124, loss_cls: 0.1247, acc: 95.2930, loss_bbox: 0.2077, loss: 0.3491
2021-09-29 07:11:00,250 - mmdet - INFO - Epoch [5][10/25]	lr: 2.500e-03, eta: 0:00:54, time: 0.442, data_time: 0.217, memory: 2389, loss_rpn_cls: 0.0055, loss_rpn_bbox: 0.0097, loss_cls: 0.1068, acc: 96.0449, loss_bbox: 0.1976, loss: 0.3195
2021-09-29 07:11:02,571 - mmdet - INFO - Epoch [5][20/25]	lr: 2.500e-03, eta: 0:00:50, time: 0.232, data_time: 0.010, memory: 2389, loss_rpn_cls: 0.0047, loss_rpn_bbox: 0.0118, loss_cls: 0.1054, acc: 95.8594, loss_bbox: 0.1720, loss: 0.2938
2021-09-29 07:11:08,224 - mmdet - INFO - Epoch [6][10/25]	lr: 2.500e-03, eta: 0:00:46, time: 0.446, data_time: 0.219, memory: 2389, loss_rpn_cls: 0.0042, loss_rpn_bbox: 0.0093, loss_cls: 0.0840, acc: 97.0605, loss_bbox: 0.1654, loss: 0.2629
2021-09-29 07:11:10,568 - mmdet - INFO - Epoch [6][20/25]	lr: 2.500e-03, eta: 0:00:43, time: 0.234, data_time: 0.010, memory: 2389, loss_rpn_cls: 0.0045, loss_rpn_bbox: 0.0098, loss_cls: 0.0955, acc: 96.4355, loss_bbox: 0.1811, loss: 0.2910
2021-09-29 07:11:16,224 - mmdet - INFO - Epoch [7][10/25]	lr: 2.500e-03, eta: 0:00:39, time: 0.444, data_time: 0.218, memory: 2389, loss_rpn_cls: 0.0039, loss_rpn_bbox: 0.0098, loss_cls: 0.0832, acc: 97.0020, loss_bbox: 0.1594, loss: 0.2564
2021-09-29 07:11:18,560 - mmdet - INFO - Epoch [7][20/25]	lr: 2.500e-03, eta: 0:00:36, time: 0.234, data_time: 0.010, memory: 2389, loss_rpn_cls: 0.0022, loss_rpn_bbox: 0.0119, loss_cls: 0.0956, acc: 96.3867, loss_bbox: 0.1766, loss: 0.2862
2021-09-29 07:11:24,218 - mmdet - INFO - Epoch [8][10/25]	lr: 2.500e-03, eta: 0:00:32, time: 0.444, data_time: 0.218, memory: 2389, loss_rpn_cls: 0.0019, loss_rpn_bbox: 0.0088, loss_cls: 0.0747, acc: 97.0215, loss_bbox: 0.1411, loss: 0.2265
2021-09-29 07:11:26,552 - mmdet - INFO - Epoch [8][20/25]	lr: 2.500e-03, eta: 0:00:29, time: 0.233, data_time: 0.009, memory: 2389, loss_rpn_cls: 0.0021, loss_rpn_bbox: 0.0091, loss_cls: 0.0827, acc: 96.8457, loss_bbox: 0.1648, loss: 0.2587
2021-09-29 07:11:32,195 - mmdet - INFO - Epoch [9][10/25]	lr: 2.500e-04, eta: 0:00:25, time: 0.443, data_time: 0.218, memory: 2389, loss_rpn_cls: 0.0011, loss_rpn_bbox: 0.0088, loss_cls: 0.0696, acc: 97.4707, loss_bbox: 0.1333, loss: 0.2128
2021-09-29 07:11:34,536 - mmdet - INFO - Epoch [9][20/25]	lr: 2.500e-04, eta: 0:00:22, time: 0.234, data_time: 0.009, memory: 2389, loss_rpn_cls: 0.0022, loss_rpn_bbox: 0.0076, loss_cls: 0.0670, acc: 97.2168, loss_bbox: 0.1285, loss: 0.2053
2021-09-29 07:11:40,200 - mmdet - INFO - Epoch [10][10/25]	lr: 2.500e-04, eta: 0:00:18, time: 0.444, data_time: 0.217, memory: 2389, loss_rpn_cls: 0.0039, loss_rpn_bbox: 0.0089, loss_cls: 0.0744, acc: 97.0898, loss_bbox: 0.1422, loss: 0.2295
2021-09-29 07:11:42,537 - mmdet - INFO - Epoch [10][20/25]	lr: 2.500e-04, eta: 0:00:15, time: 0.234, data_time: 0.010, memory: 2389, loss_rpn_cls: 0.0016, loss_rpn_bbox: 0.0065, loss_cls: 0.0646, acc: 97.4219, loss_bbox: 0.1253, loss: 0.1981
2021-09-29 07:11:48,168 - mmdet - INFO - Epoch [11][10/25]	lr: 2.500e-04, eta: 0:00:11, time: 0.442, data_time: 0.217, memory: 2389, loss_rpn_cls: 0.0021, loss_rpn_bbox: 0.0079, loss_cls: 0.0757, acc: 97.2656, loss_bbox: 0.1308, loss: 0.2166
2021-09-29 07:11:50,520 - mmdet - INFO - Epoch [11][20/25]	lr: 2.500e-04, eta: 0:00:08, time: 0.235, data_time: 0.010, memory: 2389, loss_rpn_cls: 0.0028, loss_rpn_bbox: 0.0089, loss_cls: 0.0669, acc: 97.6562, loss_bbox: 0.1334, loss: 0.2120
2021-09-29 07:11:56,181 - mmdet - INFO - Epoch [12][10/25]	lr: 2.500e-05, eta: 0:00:04, time: 0.444, data_time: 0.218, memory: 2389, loss_rpn_cls: 0.0024, loss_rpn_bbox: 0.0064, loss_cls: 0.0634, acc: 97.4609, loss_bbox: 0.1193, loss: 0.1915
2021-09-29 07:11:58,512 - mmdet - INFO - Epoch [12][20/25]	lr: 2.500e-05, eta: 0:00:01, time: 0.233, data_time: 0.009, memory: 2389, loss_rpn_cls: 0.0028, loss_rpn_bbox: 0.0063, loss_cls: 0.0611, acc: 97.5195, loss_bbox: 0.0994, loss: 0.1697
2021-09-29 07:11:59,659 - mmdet - INFO - Saving checkpoint at 12 epochs


[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 25/25, 15.9 task/s, elapsed: 2s, ETA:     0s
---------------iou_thr: 0.5---------------


2021-09-29 07:12:02,012 - mmdet - INFO - 
+------------+-----+------+--------+-------+
| class      | gts | dets | recall | ap    |
+------------+-----+------+--------+-------+
| Car        | 62  | 147  | 0.903  | 0.819 |
| Pedestrian | 13  | 54   | 0.923  | 0.796 |
| Cyclist    | 7   | 61   | 0.571  | 0.095 |
+------------+-----+------+--------+-------+
| mAP        |     |      |        | 0.570 |
+------------+-----+------+--------+-------+
2021-09-29 07:12:02,015 - mmdet - INFO - Epoch(val) [12][25]	AP50: 0.5700, mAP: 0.5700
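With training finished, the fine-tuned model can be tried on a single image before running it over a video. The image id below is just an example; any file under training/image_2 (ideally one listed in val.txt) works:

model.cfg = cfg   # inference_detector reads the test pipeline from model.cfg
img = '/content/kitti_tiny/training/image_2/000068.jpeg'   # example id, not special
result = inference_detector(model, img)
show_result_pyplot(model, img, result, score_thr=0.4)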
import cv2
import mmcv

# attach the config so inference uses the right test pipeline (a no-op if already set above)
model.cfg = cfg

# read the source clip and open a writer with the same fps and frame size
video_reader = mmcv.VideoReader("/content/data/songdo_driving_Trim2.mp4")
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
video_writer = cv2.VideoWriter("/content/data/songdo_driving_Trim2_out.mp4", fourcc,
                               video_reader.fps, (video_reader.width, video_reader.height))

# run detection frame by frame and write the visualized result
for frame in mmcv.track_iter_progress(video_reader):
  result = inference_detector(model, frame)
  frame = model.show_result(frame, result, score_thr=0.4)
  video_writer.write(frame)

if video_writer:
  video_writer.release()
[                                                  ] 0/2939, elapsed: 0s, ETA:

/usr/local/lib/python3.7/dist-packages/mmdet-2.17.0-py3.7.egg/mmdet/datasets/utils.py:69: UserWarning: "ImageToTensor" pipeline is replaced by "DefaultFormatBundle" for batch inference. It is recommended to manually replace it in the test data pipeline in your config file.
  'data pipeline in your config file.', UserWarning)
/usr/local/lib/python3.7/dist-packages/mmdet-2.17.0-py3.7.egg/mmdet/core/anchor/anchor_generator.py:324: UserWarning: ``grid_anchors`` would be deprecated soon. Please use ``grid_priors`` 
  warnings.warn('``grid_anchors`` would be deprecated soon. '
/usr/local/lib/python3.7/dist-packages/mmdet-2.17.0-py3.7.egg/mmdet/core/anchor/anchor_generator.py:361: UserWarning: ``single_level_grid_anchors`` would be deprecated soon. Please use ``single_level_grid_priors`` 
  '``single_level_grid_anchors`` would be deprecated soon. '


[>>>>>>>>>>>>>>>>>>>>>>>>>>>] 2939/2939, 3.4 task/s, elapsed: 853s, ETA:     0s
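The fine-tuned weights were saved by CheckpointHook into work_dir. To reuse them later (e.g. in a fresh session), they can be reloaded with init_detector; the file name below assumes the default epoch-based naming used by EpochBasedRunner:

from mmdet.apis import init_detector
trained = init_detector(cfg, './tutorial_exps/epoch_12.pth', device='cuda:0')   # written at epoch 12 by CheckpointHook(interval=12)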