How to evenly distribute data to multiple GPUs using Keras

Data Science Asked by fengxu on December 15, 2020

I am using Keras 2.3.1 with the TensorFlow-GPU 2.0.0 backend. When I train a model on two RTX 2080 Ti 11 GB GPUs, all data is allocated to '/gpu:0' and nothing changes on '/gpu:1'; the second GPU is not used at all.

However, each GPU works fine when I select it as the only device.

Moreover, the two GPUs run in parallel without problems in PyTorch.

Following some examples, I tried to run on multiple GPUs with the code below.

[screenshots of the multi-GPU training code, not preserved]
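
The screenshots are lost; as a minimal sketch, the typical Keras 2.3.x multi-GPU pattern uses keras.utils.multi_gpu_model. The toy model below is a placeholder for illustration, not the original code from the question:

    from keras.models import Sequential
    from keras.layers import Dense
    from keras.utils import multi_gpu_model

    # Build the model, then replicate it across both GPUs.
    # multi_gpu_model was the standard data-parallel API in Keras 2.3.x.
    model = Sequential([
        Dense(256, activation="relu", input_shape=(32,)),  # placeholder layers
        Dense(1),
    ])
    parallel_model = multi_gpu_model(model, gpus=2)
    parallel_model.compile(optimizer="adam", loss="mse")

    # parallel_model.fit(x, y, batch_size=64, epochs=2)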

Below is the nvidia-smi output while a multi-GPU model is running.

[screenshot of nvidia-smi output, not preserved]

My environment: CUDA 10.1, cuDNN 7.6.5.

One Answer

Check out the docs on TensorFlow GPU usage.

If you want data parallelism, where a copy of your model runs on each GPU and the data is split between them, you can use tf.distribute.MirroredStrategy.
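
A minimal sketch, assuming tf.keras rather than standalone Keras (multi-backend Keras 2.3.x does not integrate with tf.distribute); the toy model and dummy data are placeholders:

    import numpy as np
    import tensorflow as tf

    # MirroredStrategy replicates the model onto every visible GPU and
    # splits each training batch evenly across the replicas.
    strategy = tf.distribute.MirroredStrategy()
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    # Build and compile inside the strategy scope so that the model's
    # variables are mirrored onto both GPUs.
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(256, activation="relu", input_shape=(32,)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")

    # Dummy data; with two GPUs, each replica receives half of every batch.
    x = np.random.rand(1024, 32).astype("float32")
    y = np.random.rand(1024, 1).astype("float32")
    model.fit(x, y, batch_size=64, epochs=2)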

The tf.distribute.Strategy docs are also a good source to read.

You should also profile your application; adding a second GPU can actually reduce performance, depending on where your bottlenecks are (for example, if the input pipeline rather than compute is the limiting factor).
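
One way to do that is the TensorBoard callback's built-in profiler; a sketch, using the integer profile_batch form that TF 2.0 supports:

    import tensorflow as tf

    # Profile the second training batch; afterwards, inspect per-GPU
    # utilization and input-pipeline stalls in TensorBoard's Profile tab.
    tb_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs", profile_batch=2)

    # model.fit(x, y, batch_size=64, epochs=2, callbacks=[tb_callback])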

Answered by Benji Albert on December 15, 2020
