Loading

F1 CAR DETECTION

[Baseline] F1 Detection

Baseline notebook for F1 Detection Challenge of Blitz 8

Shubhamaicrowd

image.png

Getting Started Code for F1 Car Detection Challenge on AIcrowd

Author : Shubhamai

Download Necessary Packages 📚

In [1]:
# Installing the AIcrowd CLI
!pip install aicrowd-cli

# Installing PyTorch
!pip install pyyaml==5.1
!pip install torch==1.7.1 torchvision==0.8.2
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
!gcc --version

# Installing Detectron2
import torch
assert torch.__version__.startswith("1.7")   
!pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.7/index.html
Collecting aicrowd-cli
  Downloading https://files.pythonhosted.org/packages/a5/8a/fca67e8c1cb1501a9653cd653232bf6fdebbb2393e3de861aad3636a1136/aicrowd_cli-0.1.6-py3-none-any.whl (51kB)
     |████████████████████████████████| 61kB 7.7MB/s 
Requirement already satisfied: toml<1,>=0.10.2 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (0.10.2)
Collecting requests<3,>=2.25.1
  Downloading https://files.pythonhosted.org/packages/29/c1/24814557f1d22c56d50280771a17307e6bf87b70727d975fd6b2ce6b014a/requests-2.25.1-py2.py3-none-any.whl (61kB)
     |████████████████████████████████| 61kB 7.6MB/s 
Collecting tqdm<5,>=4.56.0
  Downloading https://files.pythonhosted.org/packages/72/8a/34efae5cf9924328a8f34eeb2fdaae14c011462d9f0e3fcded48e1266d1c/tqdm-4.60.0-py2.py3-none-any.whl (75kB)
     |████████████████████████████████| 81kB 9.5MB/s 
Collecting rich<11,>=10.0.0
  Downloading https://files.pythonhosted.org/packages/1a/da/2a1f064dc620ab47f3f826ae085384084b71ea05c8c21d67f1dfc29189ab/rich-10.1.0-py3-none-any.whl (201kB)
     |████████████████████████████████| 204kB 44.4MB/s 
Collecting gitpython<4,>=3.1.12
  Downloading https://files.pythonhosted.org/packages/a6/99/98019716955ba243657daedd1de8f3a88ca1f5b75057c38e959db22fb87b/GitPython-3.1.14-py3-none-any.whl (159kB)
     |████████████████████████████████| 163kB 53.3MB/s 
Requirement already satisfied: click<8,>=7.1.2 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (7.1.2)
Collecting requests-toolbelt<1,>=0.9.1
  Downloading https://files.pythonhosted.org/packages/60/ef/7681134338fc097acef8d9b2f8abe0458e4d87559c689a8c306d0957ece5/requests_toolbelt-0.9.1-py2.py3-none-any.whl (54kB)
     |████████████████████████████████| 61kB 9.9MB/s 
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2.10)
Requirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (3.0.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2020.12.5)
Collecting commonmark<0.10.0,>=0.9.0
  Downloading https://files.pythonhosted.org/packages/b1/92/dfd892312d822f36c55366118b95d914e5f16de11044a27cf10a7d71bbbf/commonmark-0.9.1-py2.py3-none-any.whl (51kB)
     |████████████████████████████████| 51kB 6.9MB/s 
Requirement already satisfied: pygments<3.0.0,>=2.6.0 in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (2.6.1)
Collecting colorama<0.5.0,>=0.4.0
  Downloading https://files.pythonhosted.org/packages/44/98/5b86278fbbf250d239ae0ecb724f8572af1c91f4a11edf4d36a206189440/colorama-0.4.4-py2.py3-none-any.whl
Requirement already satisfied: typing-extensions<4.0.0,>=3.7.4 in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (3.7.4.3)
Collecting gitdb<5,>=4.0.1
  Downloading https://files.pythonhosted.org/packages/ea/e8/f414d1a4f0bbc668ed441f74f44c116d9816833a48bf81d22b697090dba8/gitdb-4.0.7-py3-none-any.whl (63kB)
     |████████████████████████████████| 71kB 10.8MB/s 
Collecting smmap<5,>=3.0.1
  Downloading https://files.pythonhosted.org/packages/68/ee/d540eb5e5996eb81c26ceffac6ee49041d473bc5125f2aa995cf51ec1cf1/smmap-4.0.0-py2.py3-none-any.whl
ERROR: google-colab 1.0.0 has requirement requests~=2.23.0, but you'll have requests 2.25.1 which is incompatible.
ERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.
Installing collected packages: requests, tqdm, commonmark, colorama, rich, smmap, gitdb, gitpython, requests-toolbelt, aicrowd-cli
  Found existing installation: requests 2.23.0
    Uninstalling requests-2.23.0:
      Successfully uninstalled requests-2.23.0
  Found existing installation: tqdm 4.41.1
    Uninstalling tqdm-4.41.1:
      Successfully uninstalled tqdm-4.41.1
Successfully installed aicrowd-cli-0.1.6 colorama-0.4.4 commonmark-0.9.1 gitdb-4.0.7 gitpython-3.1.14 requests-2.25.1 requests-toolbelt-0.9.1 rich-10.1.0 smmap-4.0.0 tqdm-4.60.0
Collecting pyyaml==5.1
  Downloading https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz (274kB)
     |████████████████████████████████| 276kB 17.7MB/s 
Building wheels for collected packages: pyyaml
  Building wheel for pyyaml (setup.py) ... done
  Created wheel for pyyaml: filename=PyYAML-5.1-cp37-cp37m-linux_x86_64.whl size=44074 sha256=51313ddc29110e105a628906533694dfe200f200e7609c02665c37e8b98a8256
  Stored in directory: /root/.cache/pip/wheels/ad/56/bc/1522f864feb2a358ea6f1a92b4798d69ac783a28e80567a18b
Successfully built pyyaml
Installing collected packages: pyyaml
  Found existing installation: PyYAML 3.13
    Uninstalling PyYAML-3.13:
      Successfully uninstalled PyYAML-3.13
Successfully installed pyyaml-5.1
Collecting torch==1.7.1
  Downloading https://files.pythonhosted.org/packages/90/5d/095ddddc91c8a769a68c791c019c5793f9c4456a688ddd235d6670924ecb/torch-1.7.1-cp37-cp37m-manylinux1_x86_64.whl (776.8MB)
     |████████████████████████████████| 776.8MB 16kB/s 
Collecting torchvision==0.8.2
  Downloading https://files.pythonhosted.org/packages/94/df/969e69a94cff1c8911acb0688117f95e1915becc1e01c73e7960a2c76ec8/torchvision-0.8.2-cp37-cp37m-manylinux1_x86_64.whl (12.8MB)
     |████████████████████████████████| 12.8MB 243kB/s 
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.7.1) (3.7.4.3)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torch==1.7.1) (1.19.5)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.7/dist-packages (from torchvision==0.8.2) (7.1.2)
ERROR: torchtext 0.9.1 has requirement torch==1.8.1, but you'll have torch 1.7.1 which is incompatible.
Installing collected packages: torch, torchvision
  Found existing installation: torch 1.8.1+cu101
    Uninstalling torch-1.8.1+cu101:
      Successfully uninstalled torch-1.8.1+cu101
  Found existing installation: torchvision 0.9.1+cu101
    Uninstalling torchvision-0.9.1+cu101:
      Successfully uninstalled torchvision-0.9.1+cu101
Successfully installed torch-1.7.1 torchvision-0.8.2
1.7.1 True
gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Looking in links: https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.7/index.html
Collecting detectron2
  Downloading https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.7/detectron2-0.4%2Bcu101-cp37-cp37m-linux_x86_64.whl (6.0MB)
     |████████████████████████████████| 6.0MB 616kB/s 
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from detectron2) (3.2.2)
Collecting omegaconf>=2
  Downloading https://files.pythonhosted.org/packages/d0/eb/9d63ce09dd8aa85767c65668d5414958ea29648a0eec80a4a7d311ec2684/omegaconf-2.0.6-py3-none-any.whl
Requirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from detectron2) (0.8.9)
Collecting yacs>=0.1.6
  Downloading https://files.pythonhosted.org/packages/38/4f/fe9a4d472aa867878ce3bb7efb16654c5d63672b86dc0e6e953a67018433/yacs-0.1.8-py3-none-any.whl
Requirement already satisfied: tensorboard in /usr/local/lib/python3.7/dist-packages (from detectron2) (2.4.1)
Collecting iopath>=0.1.2
  Downloading https://files.pythonhosted.org/packages/21/d0/22104caed16fa41382702fed959f4a9b088b2f905e7a82e4483180a2ec2a/iopath-0.1.8-py3-none-any.whl
Requirement already satisfied: termcolor>=1.1 in /usr/local/lib/python3.7/dist-packages (from detectron2) (1.1.0)
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.7/dist-packages (from detectron2) (1.3.0)
Collecting fvcore<0.1.4,>=0.1.3
  Downloading https://files.pythonhosted.org/packages/6b/68/2bacb80e13c4084dfc37fec8f17706a1de4c248157561ff33e463399c4f5/fvcore-0.1.3.post20210317.tar.gz (47kB)
     |████████████████████████████████| 51kB 6.3MB/s 
Requirement already satisfied: pydot in /usr/local/lib/python3.7/dist-packages (from detectron2) (1.3.0)
Requirement already satisfied: pycocotools>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from detectron2) (2.0.2)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from detectron2) (0.16.0)
Requirement already satisfied: Pillow>=7.1 in /usr/local/lib/python3.7/dist-packages (from detectron2) (7.1.2)
Requirement already satisfied: tqdm>4.29.0 in /usr/local/lib/python3.7/dist-packages (from detectron2) (4.60.0)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->detectron2) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->detectron2) (2.4.7)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->detectron2) (2.8.1)
Requirement already satisfied: numpy>=1.11 in /usr/local/lib/python3.7/dist-packages (from matplotlib->detectron2) (1.19.5)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->detectron2) (1.3.1)
Requirement already satisfied: PyYAML>=5.1.* in /usr/local/lib/python3.7/dist-packages (from omegaconf>=2->detectron2) (5.1)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from omegaconf>=2->detectron2) (3.7.4.3)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2) (3.3.4)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2) (1.28.1)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2) (3.12.4)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2) (2.25.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2) (1.8.0)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2) (1.32.0)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2) (0.12.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2) (1.0.1)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2) (0.36.2)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2) (56.1.0)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2) (1.15.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2) (0.4.4)
Collecting portalocker
  Downloading https://files.pythonhosted.org/packages/68/33/cb524f4de298509927b90aa5ee34767b9a2b93e663cf354b2a3efa2b4acd/portalocker-2.3.0-py2.py3-none-any.whl
Requirement already satisfied: cython>=0.27.3 in /usr/local/lib/python3.7/dist-packages (from pycocotools>=2.0.2->detectron2) (0.29.22)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard->detectron2) (3.10.1)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->detectron2) (4.2.1)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3.6" in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->detectron2) (4.7.2)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->detectron2) (0.2.8)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2) (2.10)
Requirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2) (3.0.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2) (2020.12.5)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard->detectron2) (1.3.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard->detectron2) (3.4.1)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.7/dist-packages (from rsa<5,>=3.1.4; python_version >= "3.6"->google-auth<2,>=1.6.3->tensorboard->detectron2) (0.4.8)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard->detectron2) (3.1.0)
Building wheels for collected packages: fvcore
  Building wheel for fvcore (setup.py) ... done
  Created wheel for fvcore: filename=fvcore-0.1.3.post20210317-cp37-none-any.whl size=58543 sha256=976f5a167f93d549e6f31be33b116cf4212db51781b97424cfa9469cd9c1adab
  Stored in directory: /root/.cache/pip/wheels/d2/ee/3a/5c531df777c03d8c67f22c65f97d6f75321087482d05a9b218
Successfully built fvcore
Installing collected packages: omegaconf, yacs, portalocker, iopath, fvcore, detectron2
Successfully installed detectron2-0.4+cu101 fvcore-0.1.3.post20210317 iopath-0.1.8 omegaconf-2.0.6 portalocker-2.3.0 yacs-0.1.8

Download Data ⏬

The first step is to download out train test data. We will be training a model on the train data and make predictions on test data. We submit our predictions.

In [2]:
API_KEY = "6c4dc1818e928ae4f5ca02323114f662" #Please enter your API Key from [https://www.aicrowd.com/participants/me]
!aicrowd login --api-key $API_KEY
API Key valid
Saved API Key successfully!
In [3]:
!aicrowd dataset download --challenge f1-car-detection -j 3
sample_submission.csv: 100% 228k/228k [00:00<00:00, 1.16MB/s]
test.zip:   0% 0.00/32.5M [00:00<?, ?B/s]
train.zip:   0% 0.00/131M [00:00<?, ?B/s]

train.csv:   0% 0.00/547k [00:00<?, ?B/s]

train.csv: 100% 547k/547k [00:00<00:00, 1.61MB/s]


val.csv: 100% 52.6k/52.6k [00:00<00:00, 559kB/s]

train.zip:  26% 33.6M/131M [00:01<00:03, 28.2MB/s]

test.zip: 100% 32.5M/32.5M [00:01<00:00, 17.4MB/s]

train.zip:  51% 67.1M/131M [00:01<00:01, 36.0MB/s]

val.zip: 100% 13.1M/13.1M [00:00<00:00, 13.7MB/s]

train.zip:  77% 101M/131M [00:02<00:00, 38.1MB/s] 
train.zip: 100% 131M/131M [00:03<00:00, 37.8MB/s]

Below, we create a new directory to put our downloaded data! 🏎

We unzip the ZIP files and move the CSVs.

In [4]:
!rm -rf data
!mkdir data

!unzip train.zip -d data/train > /dev/null

!unzip val.zip -d data/val > /dev/null

!unzip test.zip -d data/test > /dev/null

!mv train.csv data/train.csv
!mv val.csv data/val.csv

Import packages 📦

It's time to import all the packages that we have downloaded and the packages that we will be needing for building our model.

In [5]:
import torch, torchvision
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()

from detectron2 import model_zoo
from detectron2.engine import DefaultTrainer, DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.structures import BoxMode

import pandas as pd
import numpy as np
import os
from PIL import Image, ImageDraw
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
import cv2
import random
from ast import literal_eval

Load Data

  • We use pandas 🐼 library to load our data.
  • Pandas loads the data into dataframes and facilitates us to analyse the data.
  • Learn more about it here 🤓
In [6]:
data_path = "data"

train_df = pd.read_csv(os.path.join(data_path, "train.csv"))
val_df = pd.read_csv(os.path.join(data_path, "val.csv"))

Visualize the data 👀

Using Pandas and the Matplot Library in Python, we will be viewing the images in our datasets.

In [7]:
train_df
Out[7]:
ImageID bboxes
0 0 [85, 174, 87, 161]
1 1 [72, 165, 72, 169]
2 2 [36, 215, 63, 189]
3 3 [52, 202, 69, 207]
4 4 [83, 146, 53, 157]
... ... ...
19995 19995 [113, 193, 77, 188]
19996 19996 [46, 211, 115, 163]
19997 19997 [45, 225, 65, 172]
19998 19998 [52, 149, 59, 153]
19999 19999 [75, 189, 86, 153]

20000 rows × 2 columns

In [8]:
# Defining a function to take a look at the images
def show_images(images, num = 5):
    
    images_to_show = np.random.choice(images, num)

    for image_id in images_to_show:

        image = Image.open(os.path.join(data_path, f"train/{image_id}.jpg"))
  
        bbox = literal_eval(train_df.loc[train_df['ImageID'] == image_id]['bboxes'].values[0])

        draw = ImageDraw.Draw(image)

        draw.rectangle([bbox[0], bbox[2], bbox[1], bbox[3]], width=1)

        plt.figure(figsize = (15,15))
        plt.imshow(image)
        plt.show()

show_images(train_df['ImageID'].unique(), num = 5)

Creating Dataset 🎈

In the section below, we will be creating the dataset that will be put into our Model for training!

In [9]:
dict_dataset = []
def get_dataset_dics():

    for index, row in train_df.iterrows():

        image = Image.open(os.path.join(data_path, f"train/{row['ImageID']}.jpg"))
        w, h = image.size
        
        ann_lst = []

        bbox = literal_eval(row['bboxes'])
    
        ann_dict = {'bbox': [bbox[0], bbox[2], bbox[1], bbox[3]],
        'bbox_mode': BoxMode.XYXY_ABS,
        'category_id': 0, #i[1]['category_id'].values[0],
        'iscrowd': 0}
        
        ann_lst.append(ann_dict)

        image_dict = {'annotations': ann_lst,
            'file_name': os.path.join(data_path, f"train/{row['ImageID']}.jpg"),
            'height': h,
            'image_id': row["ImageID"], #i[1]['image_category_id'].values[0],
            'width': w}
          
        dict_dataset.append(image_dict)

    return dict_dataset

dict_dataset = get_dataset_dics()
In [10]:
d = f"f1_train{np.random.randint(10000)}"
DatasetCatalog.register(d, lambda d=d : get_dataset_dics())
MetadataCatalog.get(d).set(thing_classes=["F1Cars"])
obj_metadata = MetadataCatalog.get(d)
In [11]:
for i in random.sample(dict_dataset, 3):
    img = cv2.imread(i["file_name"])
    visualizer = Visualizer(img, metadata=obj_metadata, scale=0.5)
    out = visualizer.draw_dataset_dict(i)
    plt.imshow(out.get_image())

Creating the Model 🏎

Now that we have the dataset is ready, it's time to create a model that we will train on our data!

In [12]:
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_DC5_3x.yaml"))
cfg.DATASETS.TRAIN = (d,)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_DC5_3x.yaml") 
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 500
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128  
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

Train the Model 🏃🏽‍♂️

In [13]:
trainer = DefaultTrainer(cfg) 
trainer.resume_or_load(resume=False)
trainer.train()
[05/10 11:29:16 d2.engine.defaults]: Model:
GeneralizedRCNN(
  (backbone): ResNet(
    (stem): BasicStem(
      (conv1): Conv2d(
        3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
        (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
      )
    )
    (res2): Sequential(
      (0): BottleneckBlock(
        (shortcut): Conv2d(
          64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv1): Conv2d(
          64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv2): Conv2d(
          64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv3): Conv2d(
          64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
      )
      (1): BottleneckBlock(
        (conv1): Conv2d(
          256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv2): Conv2d(
          64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv3): Conv2d(
          64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
      )
      (2): BottleneckBlock(
        (conv1): Conv2d(
          256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv2): Conv2d(
          64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
        (conv3): Conv2d(
          64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
      )
    )
    (res3): Sequential(
      (0): BottleneckBlock(
        (shortcut): Conv2d(
          256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv1): Conv2d(
          256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv2): Conv2d(
          128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv3): Conv2d(
          128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
      )
      (1): BottleneckBlock(
        (conv1): Conv2d(
          512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv2): Conv2d(
          128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv3): Conv2d(
          128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
      )
      (2): BottleneckBlock(
        (conv1): Conv2d(
          512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv2): Conv2d(
          128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv3): Conv2d(
          128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
      )
      (3): BottleneckBlock(
        (conv1): Conv2d(
          512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv2): Conv2d(
          128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        )
        (conv3): Conv2d(
          128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
      )
    )
    (res4): Sequential(
      (0): BottleneckBlock(
        (shortcut): Conv2d(
          512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
        (conv1): Conv2d(
          512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (1): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (2): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (3): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (4): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (5): BottleneckBlock(
        (conv1): Conv2d(
          1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv2): Conv2d(
          256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
        (conv3): Conv2d(
          256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
    )
    (res5): Sequential(
      (0): BottleneckBlock(
        (shortcut): Conv2d(
          1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        )
        (conv1): Conv2d(
          1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv2): Conv2d(
          512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv3): Conv2d(
          512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        )
      )
      (1): BottleneckBlock(
        (conv1): Conv2d(
          2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv2): Conv2d(
          512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv3): Conv2d(
          512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        )
      )
      (2): BottleneckBlock(
        (conv1): Conv2d(
          2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv2): Conv2d(
          512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False
          (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
        (conv3): Conv2d(
          512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
          (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        )
      )
    )
  )
  (proposal_generator): RPN(
    (rpn_head): StandardRPNHead(
      (conv): Conv2d(2048, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (objectness_logits): Conv2d(2048, 15, kernel_size=(1, 1), stride=(1, 1))
      (anchor_deltas): Conv2d(2048, 60, kernel_size=(1, 1), stride=(1, 1))
    )
    (anchor_generator): DefaultAnchorGenerator(
      (cell_anchors): BufferList()
    )
  )
  (roi_heads): StandardROIHeads(
    (box_pooler): ROIPooler(
      (level_poolers): ModuleList(
        (0): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
      )
    )
    (box_head): FastRCNNConvFCHead(
      (flatten): Flatten(start_dim=1, end_dim=-1)
      (fc1): Linear(in_features=100352, out_features=1024, bias=True)
      (fc_relu1): ReLU()
      (fc2): Linear(in_features=1024, out_features=1024, bias=True)
      (fc_relu2): ReLU()
    )
    (box_predictor): FastRCNNOutputLayers(
      (cls_score): Linear(in_features=1024, out_features=2, bias=True)
      (bbox_pred): Linear(in_features=1024, out_features=4, bias=True)
    )
  )
)
[05/10 11:29:21 d2.data.build]: Removed 0 images with no usable annotations. 40000 images left.
[05/10 11:29:23 d2.data.build]: Distribution of instances among all 1 categories:
|  category  | #instances   |
|:----------:|:-------------|
|   F1Cars   | 40000        |
|            |              |
[05/10 11:29:23 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[05/10 11:29:23 d2.data.build]: Using training sampler TrainingSampler
[05/10 11:29:23 d2.data.common]: Serializing 40000 elements to byte tensors and concatenating them all ...
[05/10 11:29:23 d2.data.common]: Serialized dataset takes 8.22 MiB
WARNING [05/10 11:29:23 d2.solver.build]: SOLVER.STEPS contains values larger than SOLVER.MAX_ITER. These values will be ignored.
model_final_68d202.pkl: 663MB [00:29, 22.5MB/s]                           
Skip loading parameter 'roi_heads.box_predictor.cls_score.weight' to the model due to incompatible shapes: (81, 1024) in the checkpoint but (2, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.cls_score.bias' to the model due to incompatible shapes: (81,) in the checkpoint but (2,) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.weight' to the model due to incompatible shapes: (320, 1024) in the checkpoint but (4, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.bias' to the model due to incompatible shapes: (320,) in the checkpoint but (4,) in the model! You might want to double check if this is expected.
[05/10 11:29:55 d2.engine.train_loop]: Starting training from iteration 0
/usr/local/lib/python3.7/dist-packages/detectron2/modeling/roi_heads/fast_rcnn.py:103: UserWarning: This overload of nonzero is deprecated:
	nonzero()
Consider using one of the following signatures instead:
	nonzero(*, bool as_tuple) (Triggered internally at  /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
  num_fg = fg_inds.nonzero().numel()
[05/10 11:30:05 d2.utils.events]:  eta: 0:03:37  iter: 19  total_loss: 1.634  loss_cls: 0.802  loss_box_reg: 0.7323  loss_rpn_cls: 0.02813  loss_rpn_loc: 0.04873  time: 0.4645  data_time: 0.0170  lr: 4.9953e-06  max_mem: 3594M
[05/10 11:30:15 d2.utils.events]:  eta: 0:03:32  iter: 39  total_loss: 1.536  loss_cls: 0.7207  loss_box_reg: 0.736  loss_rpn_cls: 0.02136  loss_rpn_loc: 0.0647  time: 0.4753  data_time: 0.0057  lr: 9.9902e-06  max_mem: 3594M
[05/10 11:30:24 d2.utils.events]:  eta: 0:03:21  iter: 59  total_loss: 1.362  loss_cls: 0.5849  loss_box_reg: 0.7004  loss_rpn_cls: 0.01854  loss_rpn_loc: 0.04776  time: 0.4761  data_time: 0.0059  lr: 1.4985e-05  max_mem: 3594M
[05/10 11:30:34 d2.utils.events]:  eta: 0:03:15  iter: 79  total_loss: 1.261  loss_cls: 0.4737  loss_box_reg: 0.7134  loss_rpn_cls: 0.02157  loss_rpn_loc: 0.03446  time: 0.4797  data_time: 0.0048  lr: 1.998e-05  max_mem: 3594M
[05/10 11:30:44 d2.utils.events]:  eta: 0:03:08  iter: 99  total_loss: 1.226  loss_cls: 0.4021  loss_box_reg: 0.7667  loss_rpn_cls: 0.01176  loss_rpn_loc: 0.02969  time: 0.4858  data_time: 0.0043  lr: 2.4975e-05  max_mem: 3594M
[05/10 11:30:54 d2.utils.events]:  eta: 0:02:59  iter: 119  total_loss: 1.239  loss_cls: 0.3621  loss_box_reg: 0.7145  loss_rpn_cls: 0.01664  loss_rpn_loc: 0.03486  time: 0.4852  data_time: 0.0056  lr: 2.997e-05  max_mem: 3594M
[05/10 11:31:04 d2.utils.events]:  eta: 0:02:50  iter: 139  total_loss: 1.11  loss_cls: 0.3159  loss_box_reg: 0.701  loss_rpn_cls: 0.01071  loss_rpn_loc: 0.03388  time: 0.4863  data_time: 0.0045  lr: 3.4965e-05  max_mem: 3594M
[05/10 11:31:14 d2.utils.events]:  eta: 0:02:41  iter: 159  total_loss: 1.058  loss_cls: 0.2813  loss_box_reg: 0.7195  loss_rpn_cls: 0.01116  loss_rpn_loc: 0.0298  time: 0.4886  data_time: 0.0046  lr: 3.996e-05  max_mem: 3594M
[05/10 11:31:24 d2.utils.events]:  eta: 0:02:32  iter: 179  total_loss: 1.025  loss_cls: 0.2499  loss_box_reg: 0.7425  loss_rpn_cls: 0.009873  loss_rpn_loc: 0.01977  time: 0.4923  data_time: 0.0052  lr: 4.4955e-05  max_mem: 3594M
[05/10 11:31:34 d2.utils.events]:  eta: 0:02:23  iter: 199  total_loss: 0.9847  loss_cls: 0.2051  loss_box_reg: 0.7388  loss_rpn_cls: 0.008901  loss_rpn_loc: 0.02183  time: 0.4925  data_time: 0.0044  lr: 4.995e-05  max_mem: 3594M
[05/10 11:31:45 d2.utils.events]:  eta: 0:02:14  iter: 219  total_loss: 0.9625  loss_cls: 0.176  loss_box_reg: 0.7211  loss_rpn_cls: 0.007551  loss_rpn_loc: 0.04124  time: 0.4950  data_time: 0.0047  lr: 5.4945e-05  max_mem: 3594M
[05/10 11:31:55 d2.utils.events]:  eta: 0:02:05  iter: 239  total_loss: 0.826  loss_cls: 0.1483  loss_box_reg: 0.6239  loss_rpn_cls: 0.007559  loss_rpn_loc: 0.02471  time: 0.4987  data_time: 0.0044  lr: 5.994e-05  max_mem: 3594M
[05/10 11:32:05 d2.utils.events]:  eta: 0:01:55  iter: 259  total_loss: 0.8492  loss_cls: 0.1207  loss_box_reg: 0.6661  loss_rpn_cls: 0.009406  loss_rpn_loc: 0.02568  time: 0.4971  data_time: 0.0047  lr: 6.4935e-05  max_mem: 3594M
[05/10 11:32:15 d2.utils.events]:  eta: 0:01:46  iter: 279  total_loss: 0.7633  loss_cls: 0.1019  loss_box_reg: 0.6065  loss_rpn_cls: 0.004824  loss_rpn_loc: 0.02772  time: 0.4983  data_time: 0.0049  lr: 6.993e-05  max_mem: 3594M
[05/10 11:32:26 d2.utils.events]:  eta: 0:01:37  iter: 299  total_loss: 0.6911  loss_cls: 0.08825  loss_box_reg: 0.5532  loss_rpn_cls: 0.01017  loss_rpn_loc: 0.03419  time: 0.5000  data_time: 0.0045  lr: 7.4925e-05  max_mem: 3594M
[05/10 11:32:36 d2.utils.events]:  eta: 0:01:28  iter: 319  total_loss: 0.6219  loss_cls: 0.08447  loss_box_reg: 0.4978  loss_rpn_cls: 0.01229  loss_rpn_loc: 0.01594  time: 0.5008  data_time: 0.0052  lr: 7.992e-05  max_mem: 3594M
[05/10 11:32:46 d2.utils.events]:  eta: 0:01:18  iter: 339  total_loss: 0.4699  loss_cls: 0.06054  loss_box_reg: 0.3347  loss_rpn_cls: 0.004029  loss_rpn_loc: 0.02019  time: 0.5020  data_time: 0.0050  lr: 8.4915e-05  max_mem: 3594M
[05/10 11:32:57 d2.utils.events]:  eta: 0:01:08  iter: 359  total_loss: 0.3651  loss_cls: 0.05393  loss_box_reg: 0.2527  loss_rpn_cls: 0.007389  loss_rpn_loc: 0.03143  time: 0.5028  data_time: 0.0048  lr: 8.991e-05  max_mem: 3594M
[05/10 11:33:08 d2.utils.events]:  eta: 0:00:59  iter: 379  total_loss: 0.3067  loss_cls: 0.0551  loss_box_reg: 0.2094  loss_rpn_cls: 0.005812  loss_rpn_loc: 0.02022  time: 0.5048  data_time: 0.0049  lr: 9.4905e-05  max_mem: 3594M
[05/10 11:33:18 d2.utils.events]:  eta: 0:00:49  iter: 399  total_loss: 0.3026  loss_cls: 0.04652  loss_box_reg: 0.2067  loss_rpn_cls: 0.007526  loss_rpn_loc: 0.02422  time: 0.5063  data_time: 0.0052  lr: 9.99e-05  max_mem: 3594M
[05/10 11:33:29 d2.utils.events]:  eta: 0:00:39  iter: 419  total_loss: 0.2762  loss_cls: 0.04638  loss_box_reg: 0.1877  loss_rpn_cls: 0.005538  loss_rpn_loc: 0.00905  time: 0.5074  data_time: 0.0051  lr: 0.0001049  max_mem: 3594M
[05/10 11:33:40 d2.utils.events]:  eta: 0:00:29  iter: 439  total_loss: 0.2375  loss_cls: 0.03383  loss_box_reg: 0.161  loss_rpn_cls: 0.002873  loss_rpn_loc: 0.01939  time: 0.5087  data_time: 0.0046  lr: 0.00010989  max_mem: 3594M
[05/10 11:33:51 d2.utils.events]:  eta: 0:00:19  iter: 459  total_loss: 0.246  loss_cls: 0.04518  loss_box_reg: 0.183  loss_rpn_cls: 0.002077  loss_rpn_loc: 0.02518  time: 0.5106  data_time: 0.0044  lr: 0.00011489  max_mem: 3594M
[05/10 11:34:02 d2.utils.events]:  eta: 0:00:09  iter: 479  total_loss: 0.2288  loss_cls: 0.03932  loss_box_reg: 0.155  loss_rpn_cls: 0.0006283  loss_rpn_loc: 0.01971  time: 0.5133  data_time: 0.0048  lr: 0.00011988  max_mem: 3594M
[05/10 11:34:20 d2.utils.events]:  eta: 0:00:00  iter: 499  total_loss: 0.2205  loss_cls: 0.03618  loss_box_reg: 0.1511  loss_rpn_cls: 0.00108  loss_rpn_loc: 0.02636  time: 0.5151  data_time: 0.0042  lr: 0.00012488  max_mem: 3594M
[05/10 11:34:21 d2.engine.hooks]: Overall training speed: 498 iterations in 0:04:16 (0.5151 s / it)
[05/10 11:34:21 d2.engine.hooks]: Total training time: 0:04:24 (0:00:08 on hooks)

Let's take a look at the loss and several other metrics by running the pice of code below:

In [14]:
%load_ext tensorboard
%tensorboard --logdir output
Output hidden; open in https://colab.research.google.com to view.

Testing Phase 😅

We are almost done. We trained and validated on the training data. Now its the time to predict on test set and make a submission.

Loading Pretrained Model

Before we predict on any data, let's quickly reload the model to predict on our dataset.

In [15]:
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")  
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
predictor = DefaultPredictor(cfg)

Predict Test Set

Predict on the test set and you are all set to make the submission!

In [16]:
test_imgs_paths = os.listdir(os.path.join(data_path, "test"))

predictions = {"ImageID":[], "bboxes":[]}

for test_img_path in tqdm(test_imgs_paths):

  img = cv2.imread(os.path.join(data_path, "test", test_img_path))
  h, w, _ = img.shape

  model_predictions = predictor(img)

  bboxes = model_predictions['instances'].pred_boxes.tensor.cpu().numpy().tolist()
  scores = model_predictions['instances'].scores.cpu().numpy().tolist()

  for n, bbox in enumerate(bboxes):

      bbox.append(scores[n])

      bboxes[n] = bbox

  image_id = test_img_path.split('.')[0]

  predictions['ImageID'].append(image_id)
  predictions['bboxes'].append(bboxes)

Save the prediction to csv

We have the DataFrame for our predictions are ready for submission. Now, we will make a submission to the AIcrowd platform.

You can directly upload the CSV to the challenge or use the AIcrowd CLI to make a final submission directly from this Colab Notebook.

In [17]:
submission = pd.DataFrame(predictions)
submission
Out[17]:
ImageID bboxes
0 2909 [[62.43992233276367, 70.71259307861328, 169.78...
1 2055 [[84.41236114501953, 12.993408203125, 133.1095...
2 3402 [[0.0, 56.31496810913086, 244.97674560546875, ...
3 3780 [[37.95216369628906, 50.83466339111328, 146.76...
4 3483 [[56.06443786621094, 111.90695190429688, 178.8...
... ... ...
4995 1388 [[61.267860412597656, 64.96198272705078, 227.1...
4996 1966 [[50.08342361450195, 91.05731964111328, 139.40...
4997 3026 [[0.0, 44.75455856323242, 164.33328247070312, ...
4998 4511 [[76.23158264160156, 76.88059997558594, 192.80...
4999 4688 [[42.6381721496582, 67.3145751953125, 221.9949...

5000 rows × 2 columns

In [18]:
submission.to_csv("submission.csv", index=False)

Making Direct Submission

Submitting with a single line of code.

In [19]:
!aicrowd submission create -c f1-car-detection -f submission.csv
submission.csv ━━━━━━━━━━━━━━━━━━━━ 100.0%703.6/702.0 KB1.5 MB/s0:00:00
                                                 ╭─────────────────────────╮                                                  
                                                 │ Successfully submitted! │                                                  
                                                 ╰─────────────────────────╯                                                  
                                                       Important links                                                        
┌──────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│  This submission │ https://www.aicrowd.com/challenges/ai-blitz-8/problems/f1-car-detection/submissions/135890              │
│                  │                                                                                                         │
│  All submissions │ https://www.aicrowd.com/challenges/ai-blitz-8/problems/f1-car-detection/submissions?my_submissions=true │
│                  │                                                                                                         │
│      Leaderboard │ https://www.aicrowd.com/challenges/ai-blitz-8/problems/f1-car-detection/leaderboards                    │
│                  │                                                                                                         │
│ Discussion forum │ https://discourse.aicrowd.com/c/ai-blitz-8                                                              │
│                  │                                                                                                         │
│   Challenge page │ https://www.aicrowd.com/challenges/ai-blitz-8/problems/f1-car-detection                                 │
└──────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────┘
{'submission_id': 135890, 'created_at': '2021-05-10T11:50:48.043Z'}
In [19]:


Comments

shashank_dhanai
Over 3 years ago

how do i transfer this data to colab , as i don’t have enough internet to download the data.. can you help me with the instruction please

You must login before you can post a comment.

Execute