Local installation: quickstart 2020

Local installation

This is a short setup guide: the minimum you need to get your Dingocar running. The DonkeyCar docs have a more complete guide; if you get stuck or need more information, that's the place to go.

We don't want Python 2, and people have reported problems with Python 3.7 and later, so we currently use Python 3.6.

Miniconda instructions:

  • Go to the Miniconda archive
  • Download the Miniconda3-4.5.4 installer for your operating system (see the sketch after this list)
  • In your command-line prompt, change to the directory holding the downloaded installer
  • Run the script: ./Miniconda3-4.5.4-Linux-x86_64.sh (or equivalent)
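
A minimal sketch of the download-and-run step for 64-bit Linux, assuming the installer comes from the standard Miniconda archive at https://repo.anaconda.com/miniconda/; substitute the filename that matches your operating system:

  # fetch the 64-bit Linux installer from the Miniconda archive
  wget https://repo.anaconda.com/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh
  # make it executable, then run it and follow the prompts
  chmod +x Miniconda3-4.5.4-Linux-x86_64.sh
  ./Miniconda3-4.5.4-Linux-x86_64.sh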

By default, the installer adds the Miniconda directory to your PATH. Now you can check that you have Python 3.6 installed and available:

  • python3 -i
  • This should show you Python 3.6.5 | Anaconda Inc.
  • Use quit() to get out of the python shell

Get Dingocar

Change to the directory where you like to keep your coding projects and get a copy of the Dingocar source there (see the sketch below); the later steps reference its install/envs files and install it with pip from ./dingocar.
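
A minimal sketch of this step, assuming you are fetching the source with git; the repository address below is a placeholder, so substitute the actual Dingocar repository URL:

  # change to the directory where you keep your coding projects
  cd ~/projects
  # clone the Dingocar source into a ./dingocar directory (placeholder URL)
  git clone <dingocar-repository-url> dingocar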

Install Tensorflow for machine learning

  • conda install tensorflow
  • conda env create -f install/envs/ubuntu.yml, conda env create -f install/envs/windows.yml, or conda env create -f install/envs/mac.yml (on a different platform? Take a look at what's in install/envs and pick the one that's right for you)
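
A quick way to confirm this step, as a sketch assuming the environment file names the new environment dingo (the name activated in the next step):

  # list the platform-specific environment files shipped with Dingocar
  ls install/envs
  # the newly created environment should appear in this list
  conda env list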

Install Dingocar

  • conda activate dingo
  • pip install -e ./dingocar

Create an instance for your specific car

  • donkey createcar --path ~/mycar #give your car its own unique name here!
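
For orientation, the new car directory is where training runs from: it holds a manage.py and a config.py (both appear in the training output below) plus a models directory for trained networks. A sketch, assuming a car named ohmc_car as in the training example; exact contents may vary with the Dingocar version:

  ls ~/ohmc_car
  # expect manage.py, config.py and a models/ directory, among other files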

Training

Run these commands on your laptop / desktop to train the Neural Network ...

  • conda activate dingo
  • cd ~/ohmc_car (the car directory you created above; use your own car's name)
  • python manage.py train --tub $HOME/ohmc_car/tub_$DATE --model ./models/model_$DATE.hdf5
 using donkey version: 2.5.7 ...
 loading config file: /Users/andyg/play/ai/roba_car/config.py
 config loaded
 tub_names ./tub_2019-01-15c
 train: 5740, validation: 1436
 steps_per_epoch 44
 Epoch 1/100
 2019-01-21 13:08:49.507048: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
 43/44 [============================>.] - ETA: 0s - loss: 58.5130 - angle_out_loss: 30.3421 - throttle_out_loss: 86.6839      
 Epoch 00001: val_loss improved from inf to 0.19699, saving model to ./models/roba0_2019-01-16c.hdf5
 44/44 [==============================] - 38s 874ms/step - loss: 57.1887 - angle_out_loss: 29.6601 - throttle_out_loss: 84.7172 - val_loss: 0.1970 - val_angle_out_loss: 0.3230 - val_throttle_out_loss: 0.0710

On a modern laptop, each epoch takes around 30 seconds to complete, and training runs for up to 100 epochs. Typically the Neural Network stops learning after around 20 to 40 epochs, which is around 10 to 20 minutes of training time.

The training command creates the Neural Network weights that represent what your DingoCar has "learned".