Local installation: quickstart 2020


Local installation

Python 2 is not supported, and people have reported problems with Python 3.7 or later, so we currently use Python 3.6.

Miniconda instructions:

  • Go to the Miniconda archive
  • Download the Miniconda3-4.5.4 installer for your system.
  • At your command-line prompt, change to the directory holding the downloaded file
  • Run the installer: ./Miniconda3-4.5.4-Linux-x86_64.sh (or the equivalent for your system; see the note below if it isn't executable)
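
If the downloaded script isn't marked executable, either of the following gets it running (shown for the Linux x86_64 installer; adjust the filename for your download):

 chmod +x Miniconda3-4.5.4-Linux-x86_64.sh   # make the installer executable, then run it with ./
 bash Miniconda3-4.5.4-Linux-x86_64.sh       # or run it through bash directly, no chmod needed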

By default, the installer adds the Miniconda directory to your PATH. Now you can check that Python 3.6 is installed and available:

  • python3 -i
  • This should show a banner like Python 3.6.5 | Anaconda, Inc.
  • Use quit() to get out of the Python shell
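
A typical check looks something like this, assuming the default install location of ~/miniconda3 (the exact path and build string will differ on your machine):

 $ which python3
 /home/you/miniconda3/bin/python3
 $ python3 -i
 Python 3.6.5 |Anaconda, Inc.| (default, ...)
 >>> quit()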

Get Dingocar

Go to whichever directory you like to keep your coding projects in.
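
The page doesn't give the repository location here, so the following is only a sketch of the fetch step with a placeholder URL; substitute the real Dingocar repository (or copy the source into a directory named dingocar by whatever means you prefer):

 cd ~/projects                               # example path: wherever you keep your coding projects
 git clone https://example.com/dingocar.git  # placeholder URL, not the real repository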

Install Tensorflow for machine learning

  • Ubuntu
    • sudo apt-get install -y virtualenv virtualenvwrapper # Note: mkvirtualenv below is provided by virtualenvwrapper; you may be using a different software installer
    • mkvirtualenv donkeycar -p python3
    • pip install tensorflow==1.8.0 # Note: requires Python 3.5 or 3.6; people have had problems with Python 3.7

(If you get errors, try upgrading pip first: python -m pip install --upgrade pip)

  • Debian, or if mkvirtualenv isn't available, try this instead
    • virtualenv donkeycar -p python3
    • cd donkeycar
    • export PATH=`pwd`/bin:$PATH
    • pip install tensorflow==1.8.0
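
Prepending the virtualenv's bin directory to PATH, as the export line above does, has much the same effect as activating the environment; the more common equivalent, if you prefer it, is:

 source ./bin/activate    # run from inside the donkeycar directory; 'deactivate' undoes it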

Alternatively, with conda:

  • conda install tensorflow-cpu # Note: this appears to install TensorFlow 2, whereas these 2019 instructions use 1.8
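
If you take the conda route, it may be safest to pin Python 3.6 in its own environment first. A sketch, where the environment name donkeycar is just an example and the tensorflow-cpu package name is taken from the line above:

 conda create -n donkeycar python=3.6
 source activate donkeycar    # or 'conda activate donkeycar' on newer conda setups
 conda install tensorflow-cpu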

Install Dingocar

  • pip install -e ./dingocar # editable install: changes to the dingocar source take effect without reinstalling
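
To confirm the install worked, a quick check (the distribution name it registers under may be dingocar or donkeycar, depending on the fork's setup.py):

 pip list | grep -i -E 'dingo|donkey'    # should list the package and its version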

Create an instance for your specific car

  • donkey createcar --path ~/mycar # give your car its own unique name here!
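
For example, the training steps below assume a car directory called ohmc_car, so you might create it as:

 donkey createcar --path ~/ohmc_car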

Training

Run these commands on your laptop / desktop to train the Neural Network ...

  • workon donkeycar # For those who set up a virtualenv with mkvirtualenv above
  • cd ~/ohmc_car # the car directory created with donkey createcar
  • python manage.py train --tub $HOME/ohmc_car/tub_$DATE --model ./models/model_$DATE.hdf5

The output should look something like this:
 using donkey version: 2.5.7 ...
 loading config file: /Users/andyg/play/ai/roba_car/config.py
 config loaded
 tub_names ./tub_2019-01-15c
 train: 5740, validation: 1436
 steps_per_epoch 44
 Epoch 1/100
 2019-01-21 13:08:49.507048: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
 43/44 [============================>.] - ETA: 0s - loss: 58.5130 - angle_out_loss: 30.3421 - throttle_out_loss: 86.6839      
 Epoch 00001: val_loss improved from inf to 0.19699, saving model to ./models/roba0_2019-01-16c.hdf5
 44/44 [==============================] - 38s 874ms/step - loss: 57.1887 - angle_out_loss: 29.6601 - throttle_out_loss: 84.7172 - val_loss: 0.1970 - val_angle_out_loss: 0.3230 - val_throttle_out_loss: 0.0710
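
The $DATE in the train command is just a shell variable used to name the tub directory and the model file; set it to match your recorded data (or type the date straight into the paths), for example:

 DATE=2019-01-15c    # matches the tub directory shown in the example output above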

On a modern laptop, each epoch takes around 30 seconds to complete, for up to 100 epochs. Typically, the Neural Network stops learning after around 20 to 40 epochs, which is around 10 to 20 minutes of training time.

The training command creates the Neural Network weights that represent what your DingoCar has "learned".