TensorFlow – Car Model Identification

What is TensorFlow?

Google’s TensorFlow is an open source software library for numerical computation. It contains several frameworks that allow for quick and simplified implementation of machine learning models and algorithms. Its Object Detection API is a framework that makes it easy to construct, train and deploy object detection models.

This guide shows how to apply this framework to the CompCars dataset in order to train a model capable of identifying car models. Due to the large size of the dataset and the number of classes being trained for, many of the steps that other tutorials perform manually are automated here with bash scripts.

Install TensorFlow and its libraries, following these instructions after reading the Notes below. Installing the CPU version is simple and hassle-free. Installing the GPU version is much more complicated but allows for significantly faster training; GPU installation will be explored in a later post.

Note: pip install tensorflow does not download the latest models folder from TensorFlow. Download the latest models folder separately and replace the original models folder with the new one.

Note: Copy the object_detection folder out of the research folder and into the models folder. During the installation process you'll be asked to run a few commands from /tensorflow/models/research. Don't. Run them from /tensorflow/models instead. The research folder was not present in the original release of TensorFlow and is not correctly referenced during program builds.

Note: Move the slim folder and object_detection folder out of models/research and into models.
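
Once the notes above have been dealt with, a quick sanity check is to import TensorFlow from Python and run a trivial op. This is just a minimal sketch using the 1.x-style Session API that the rest of this guide assumes.

    # Minimal sanity check that the TensorFlow (CPU) install is usable.
    import tensorflow as tf

    print(tf.__version__)                      # prints the installed version
    with tf.Session() as sess:                 # TF 1.x style session
        print(sess.run(tf.constant("install OK")))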

--------------------------------------------------------------------------------

Labeling and Extracting the Data

Request the CompCars dataset from Mr. Linjie Yang; details can be found at http://mmlab.ie.cuhk.edu.hk/datasets/comp_cars/index.html. Once again, follow the instructions he provides to extract the files.

Note: Due to a missing dictionary within the surveillance dataset, I was not able to make use of it, so we will work only with the web-nature dataset. Download and unzip the data files, not the sv_data files.

Once you complete the extraction instructions, you should have a folder called data containing, among other things, the image, label, and misc folders referenced below.

In order to train a model, TensorFlow requires images, and those images must be labeled in some way. LabelImg is the tool commonly used for this. It lets the user select and label the class of the object being trained for, and it produces an xml file containing information such as the filename, the class of the object to identify, and the xy-bounds of the object within the image. It would take days, however, to label every image in the CompCars dataset, so run imageLabeler inside the label folder instead to create a txt file ("labeledData") containing the relevant data about each image.

Note: You should modify the path OUT_DIR in imageLabeler to specify where the output file is written.
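
For reference, LabelImg writes Pascal VOC-style xml, so the fields mentioned above (filename, class, and the xy-bounds) can be read back with nothing but the standard library. This is only an illustrative sketch; the file name is a placeholder, and in this project the web-nature labels come out of imageLabeler as the labeledData txt file instead.

    # Illustrative: pull the filename, class and bounding box out of a
    # LabelImg-style (Pascal VOC) xml file. "example.xml" is a placeholder.
    import xml.etree.ElementTree as ET

    root = ET.parse("example.xml").getroot()
    filename = root.find("filename").text
    for obj in root.findall("object"):
        label = obj.find("name").text                      # class being trained for
        box = obj.find("bndbox")
        xmin, ymin = int(box.find("xmin").text), int(box.find("ymin").text)
        xmax, ymax = int(box.find("xmax").text), int(box.find("ymax").text)
        print(filename, label, xmin, ymin, xmax, ymax)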

  • Manually convert the labeledData txt file into a csv file (labeledData.csv).
  • Open make_model_name.mat in Octave (free) or Matlab, and use the load() function to open the file.

ex. var = load("make_model_name.mat")

  • var will contain two lists: make_names and model_names.
  • Copy and paste the contents of the model_names list to a txt file called models.txt.
  • In the misc folder, run modelMapper to produce a txt file called conditionMap that contains the conditional logic to be pasted into the generate_tfrecord.py file.
  • Run classMapper to create labelMap.pbtxt, a file which will also be used later to train the model.
  • Run trainTestSplit to divide labeledData.csv into two files, test.csv and train.csv. For each car model, roughly 70% of its images will be put into the training set and 30% into the testing set (a sketch of this per-model split appears after this list).
  • Run errorFinder on train.csv to find the lines with errors introduced by trainTestSplit. Once you have dealt with any errors in train.csv, change the FILE path and run it on test.csv as well.
  • Run imageExtractor from within the image folder to copy all the images into a single folder which will later be used to generate the tf records.
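
For anyone who wants to see the idea behind trainTestSplit rather than just run it, the sketch below does a per-model 70/30 shuffle-and-split. The "class" column name is an assumption about the labeledData.csv layout; this is not the actual script.

    # Sketch of a per-model 70/30 split in the spirit of trainTestSplit.
    # Assumes labeledData.csv has a header row and a "class" column holding
    # the car model of each image; adjust the column name to your file.
    import csv
    import random
    from collections import defaultdict

    rows_by_model = defaultdict(list)
    with open("labeledData.csv", newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        for row in reader:
            rows_by_model[row["class"]].append(row)

    train_rows, test_rows = [], []
    for model, rows in rows_by_model.items():
        random.shuffle(rows)
        cut = int(round(len(rows) * 0.7))       # ~70% of each model to training
        train_rows.extend(rows[:cut])
        test_rows.extend(rows[cut:])

    for path, rows in (("train.csv", train_rows), ("test.csv", test_rows)):
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)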

--------------------------------------------------------------------------------

Training the Model

I adapted the instructions from Singh's tutorial to train the model. Read through it and download the files described there before continuing.

TensorFlow uses a specific file type called tfrecord to train models. TensorFlow provides a template that I modified to convert csv files into tfrecords. Run generate_tfrecord to create train.record and test.record. You may need to modify the paths.
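
For context, each csv row ends up as a tf.train.Example inside the record file, and the conditionMap logic from earlier becomes the label-name-to-integer mapping. The sketch below is not the exact template, just the core of that conversion; the csv column names and the placeholder label are assumptions.

    # Sketch: how one csv row becomes a tf.train.Example for a .record file.
    # Feature keys follow the Object Detection API convention; the csv column
    # names ("filename", "class", "xmin", ...) are assumptions.
    import tensorflow as tf

    def class_text_to_int(row_label):
        # The conditional logic generated by modelMapper goes here, mapping
        # each car model name to its integer id from labelMap.pbtxt.
        if row_label == "some_model_name":      # illustrative placeholder
            return 1
        raise ValueError("unknown label: %s" % row_label)

    def create_tf_example(row, image_bytes, width, height):
        feature = {
            "image/encoded": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
            "image/filename": tf.train.Feature(bytes_list=tf.train.BytesList(value=[row["filename"].encode()])),
            "image/width": tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
            "image/height": tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
            "image/object/bbox/xmin": tf.train.Feature(float_list=tf.train.FloatList(value=[float(row["xmin"]) / width])),
            "image/object/bbox/xmax": tf.train.Feature(float_list=tf.train.FloatList(value=[float(row["xmax"]) / width])),
            "image/object/bbox/ymin": tf.train.Feature(float_list=tf.train.FloatList(value=[float(row["ymin"]) / height])),
            "image/object/bbox/ymax": tf.train.Feature(float_list=tf.train.FloatList(value=[float(row["ymax"]) / height])),
            "image/object/class/text": tf.train.Feature(bytes_list=tf.train.BytesList(value=[row["class"].encode()])),
            "image/object/class/label": tf.train.Feature(int64_list=tf.train.Int64List(value=[class_text_to_int(row["class"])])),
        }
        return tf.train.Example(features=tf.train.Features(feature=feature))

    # Each example is then serialized with a TFRecordWriter (TF 1.x API):
    #   writer = tf.python_io.TFRecordWriter("train.record")
    #   writer.write(create_tf_example(...).SerializeToString())
    #   writer.close()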

Inside tensorflow/models/object_detection…

  1. Place train.record and test.record in the data folder.
  2. Copy train.py out of the legacy folder into object_detection.
  3. Create a folder called training. Inside it place labelMap.pbtxt, ssd_mobilenet_v1_pets.config and the contents of the ssd_mobilenet_v1_coco_11_06_2017 folder. The config needs to be modified: point its label map, input record, and fine-tune checkpoint paths at your files, and set num_classes to the number of car models.
  4. From inside object_detection, run the train.py file as described in Singh’s tutorial.

Et voilà! The model is now being trained. The program will report the training loss periodically. Once the loss becomes acceptable or stops decreasing, you can end the training and export the model into a usable state (a frozen inference graph).

You can now test this model using TensorFlow's own object detection tutorial code. You'll have to change the paths in boxes 5 and 9.
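
If you would rather test outside the notebook, the sketch below runs the exported frozen graph on a single image using the standard tensor names the Object Detection API exports. Both paths are placeholders; adjust them to wherever you exported the graph, and use labelMap.pbtxt to translate the returned class ids back into car model names.

    # Sketch: run the exported frozen graph on one image (TF 1.x API).
    # Both paths below are placeholders.
    import numpy as np
    import tensorflow as tf
    from PIL import Image

    PATH_TO_FROZEN_GRAPH = "training/frozen_inference_graph.pb"

    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")

    image = np.expand_dims(np.array(Image.open("test_image.jpg")), axis=0)

    with tf.Session(graph=graph) as sess:
        boxes, scores, classes = sess.run(
            ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
            feed_dict={"image_tensor:0": image})
        print(classes[0][:5], scores[0][:5])    # top class ids and confidences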

*All scripts mentioned in this post can be found in this repository.


Setting Up GPU Environment for Model Training