I am new to TensorFlow. I have recently installed it (Windows CPU version) and received the following message:
Successfully installed tensorflow-1.4.0 tensorflow-tensorboard-0.4.0rc2
Then when I tried to run
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> sess.run(hello)
'Hello, TensorFlow!'
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> sess.run(a + b)
42
>>> sess.close()
(which I found through https://github.com/tensorflow/tensorflow)
I received the following message:
2017-11-02 01:56:21.698935: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
But when I ran
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
it ran as it should and output
Hello, TensorFlow!, which indicates that the installation was indeed successful, but something else is wrong.
Do you know what the problem is and how to fix it?
What is this warning about?
Modern CPUs provide a lot of low-level instructions, besides the usual arithmetic and logic, known as extensions, e.g. SSE2, SSE4, AVX, etc. From Wikipedia:
Advanced Vector Extensions (AVX) are extensions to the x86 instruction set architecture for microprocessors from Intel and AMD proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge processor shipping in Q1 2011 and later on by AMD with the Bulldozer processor shipping in Q3 2011. AVX provides new features, new instructions and a new coding scheme.
In particular, AVX introduces fused multiply-accumulate (FMA) operations, which speed up linear algebra computations, namely dot products, matrix multiplication, convolution, etc. Almost all machine-learning training involves a great deal of these operations, so it will be faster on a CPU that supports AVX and FMA (by up to 300%). The warning states that your CPU does support AVX (hooray!).
I’d like to stress here: it’s all about CPU only.
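If you want to confirm which of these extensions your own CPU supports, here is a minimal sketch using the third-party py-cpuinfo package (an assumption on my part for illustration; it is not required by anything above):

# Sketch: list which vector extensions the CPU reports.
# Assumes the third-party py-cpuinfo package (pip install py-cpuinfo).
import cpuinfo

flags = cpuinfo.get_cpu_info()['flags']
for ext in ('sse4_1', 'sse4_2', 'avx', 'avx2', 'fma'):
    print(ext, 'supported' if ext in flags else 'NOT supported')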
Why isn’t it used then?
Because the default TensorFlow distribution is built without CPU extensions, such as SSE4.1, SSE4.2, AVX, AVX2, FMA, etc. The default builds (the ones from pip install tensorflow) are intended to be compatible with as many CPUs as possible. Another argument is that, even with these extensions, a CPU is a lot slower than a GPU, and medium- and large-scale machine-learning training is expected to be performed on a GPU.
What should you do?
If you have a GPU, you shouldn't care about AVX support, because most expensive ops will be dispatched to the GPU device (unless explicitly set not to). In this case, you can simply ignore this warning by
# Just disables the warning, doesn't enable AVX/FMA
# (set this before importing tensorflow)
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
… or by setting
export TF_CPP_MIN_LOG_LEVEL=2
if you're on Unix. TensorFlow will work fine either way, but you won't see these annoying warnings.
If you don't have a GPU and want to utilize the CPU as much as possible, you should build TensorFlow from source, optimized for your CPU, with AVX, AVX2, and FMA enabled, if your CPU supports them. This has been discussed in this question and also in this GitHub issue. TensorFlow uses an ad-hoc build system called Bazel, and building it is not trivial, but it is certainly doable. After this, not only will the warning disappear, TensorFlow performance should also improve.
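As a rough sketch only (assuming a GCC/Clang toolchain; the exact flags and Bazel targets depend on your TensorFlow version and platform, so treat this as an illustration rather than a canonical recipe), a from-source build with the extensions enabled looks something like this:

git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure
# enable only the extensions your CPU actually supports
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-msse4.1 --copt=-msse4.2 //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/<generated-wheel>.whl   # the wheel filename is a placeholder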
Update the TensorFlow binary for your CPU & OS using this command:
pip install --ignore-installed --upgrade "Download URL"
The download URL of the .whl file can be found here.
CPU optimization with GPU
There are performance gains you can get by building TensorFlow from source even if you have a GPU and use it for training and inference. The reason is that some TF operations only have a CPU implementation and cannot run on your GPU.
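If you want to see which ops end up on the CPU despite a GPU being available, here is a small sketch using TF 1.x device-placement logging (the constants are placeholders I made up for illustration):

import tensorflow as tf

# Log the device each op is placed on; ops without a GPU kernel will show up on /cpu:0.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(b))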
Also, there are some performance enhancement tips that make good use of your CPU. TensorFlow's performance guide recommends the following:
Placing input pipeline operations on the CPU can significantly improve performance. Utilizing the CPU for the input pipeline frees the GPU to focus on training.
For best performance, you should write your code to utilize your CPU and GPU in tandem, and not dump it all on your GPU if you have one. Having your TensorFlow binaries optimized for your CPU could pay off in hours of saved running time, and you only have to do it once.
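As a minimal sketch of the "input pipeline on the CPU" tip (TF 1.x style; the data and the preprocess function are placeholders, not part of the original advice):

import numpy as np
import tensorflow as tf

features = np.random.rand(1000, 28, 28).astype(np.float32)  # placeholder data

def preprocess(x):
    return x / 255.0  # placeholder preprocessing

# Pin the input pipeline to the CPU so the GPU stays free for the training ops.
with tf.device('/cpu:0'):
    dataset = tf.data.Dataset.from_tensor_slices(features)
    dataset = dataset.map(preprocess, num_parallel_calls=4)
    dataset = dataset.batch(32).prefetch(1)
    next_batch = dataset.make_one_shot_iterator().get_next()  # consumed by training ops on the GPU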
For Windows, you can check the official Intel MKL optimization for TensorFlow wheels that are compiled with AVX2. This solution sped up my inference by about 3x.
conda install tensorflow-mkl
For Windows (thanks to the owner fo40225), go here: https://github.com/fo40225/tensorflow-windows-wheel and fetch the URL for your environment based on the combination of “tf + python + cpu_instruction_extension”. Then use this command to install:
pip install --ignore-installed --upgrade "URL"
If you encounter the “File is not a zip file” error, download the .whl to your local computer, and use this command to install:
pip install --ignore-installed --upgrade /path/target.whl
If you use the pip version of TensorFlow, it means it's already compiled and you are just installing it. Basically you install tensorflow-gpu, but when you download it from the repository and try to build it yourself, you should build it with CPU AVX support. If you ignore this, you will get the warning every time you run on the CPU.
The easiest way that I found to fix this is to uninstall everything and then install a specific version of tensorflow-gpu:
- uninstall tensorflow:
pip uninstall tensorflow
- uninstall tensorflow-gpu: (make sure to run this even if you are not sure if you installed it)
pip uninstall tensorflow-gpu
- Install specific tensorflow-gpu version:
pip install tensorflow-gpu==2.0.0
pip install tensorflow_hub
pip install tensorflow_datasets
You can check whether this worked by adding the following code to a Python file:
from __future__ import absolute_import, division, print_function, unicode_literals

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds

print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub Version: ", hub.__version__)
print("GPU is", "available" if tf.config.experimental.list_physical_devices("GPU") else "NOT AVAILABLE")
Run the file and then the output should be something like this:
Version:  2.0.0
Eager mode:  True
Hub Version:  0.7.0
GPU is available
Hope this helps
What worked for me, though, is this library.
Install this library and do as instructed on the page; it works like a charm!