An error occurred when running "process.py" with the command below.
python tools/process.py --input_dir data --operation resize --output_dir data2/resize
data/0.jpg -> data2/resize/0.png
Traceback (most recent call last):
  File "tools/process.py", line 235, in <module>
    main()
  File "tools/process.py", line 167, in main
    src = load(src_path)
  File "tools/process.py", line 113, in load
    contents = open(path).read()
  File "/home/user/anaconda3/envs/tensorflow_2/lib/python3.5/codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
What is the cause of the error? Python’s version is 3.5.2.
Python tries to convert a byte-array (a bytes, which it assumes to be a utf-8-encoded string) to a unicode string (str). This process of course is decoding according to utf-8 rules. When it tries this, it encounters a byte sequence which is not allowed in utf-8-encoded strings (namely this 0xff at position 0).
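The failure can be reproduced in isolation, without any file at all. The sample bytes below are an assumption (they happen to be the typical first bytes of a JPEG file, which matches the resize command in the question):

```python
# Minimal reproduction: 0xff can never start a valid UTF-8 sequence,
# so decoding these bytes raises the same error as in the traceback.
data = b'\xff\xd8\xff\xe0'  # hypothetical sample: typical first bytes of a JPEG

try:
    data.decode('utf-8')
except UnicodeDecodeError as e:
    print(e)  # 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
```

Calling open(path).read() on such a file triggers exactly this decode internally, which is why the traceback ends in codecs.py.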
Since you did not provide any code we could look at, we can only guess about the rest.
From the stack trace we can assume that the triggering action was reading from a file (contents = open(path).read()). I propose to recode this in a fashion like this:
with open(path, 'rb') as f:
    contents = f.read()
The b in the mode specifier of open() states that the file shall be treated as binary, so contents will remain a bytes. No decoding attempt will happen this way.
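A quick self-contained check of this, using a temporary file filled with bytes that would crash a text-mode read (the sample bytes are an assumption):

```python
import tempfile

# Write bytes that are invalid UTF-8, then read them back in binary mode.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'\xff\xd8\xff\xe0')  # would raise UnicodeDecodeError in text mode
    path = tmp.name

with open(path, 'rb') as f:
    contents = f.read()

print(type(contents))  # <class 'bytes'> -- no decoding was attempted
print(contents[:2])    # b'\xff\xd8'
```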
Use this solution if you want to strip out (ignore) the offending characters and get the string back without them. Only use it if your need is to strip them, not to convert them.
with open(path, encoding="utf8", errors='ignore') as f:
You'll just lose some characters, but that is fine if you don't care about them, for example when they are extra characters originating from the bad formatting and programming of clients connecting to a socket server. Then it's an easy, direct solution.
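A minimal sketch of what errors='ignore' does, using a temporary file with one stray invalid byte (the sample content is an assumption):

```python
import tempfile

# errors='ignore' silently drops undecodable bytes instead of raising.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'hello \xff world')  # 0xff is invalid UTF-8
    path = tmp.name

with open(path, encoding="utf8", errors='ignore') as f:
    contents = f.read()

print(repr(contents))  # 'hello  world' -- the 0xff byte is simply gone
```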
Use the ISO-8859-1 encoding to solve the issue.
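The reason this "works" is that ISO-8859-1 (latin-1) maps every possible byte value 0-255 to a code point, so decoding can never fail; the result may still be mojibake if the file was really in another encoding. A tiny sketch (the sample bytes are an assumption):

```python
# latin-1 decoding never raises: every byte maps to some character.
raw = b'\xff\xfe caf\xe9'          # invalid as UTF-8, valid as latin-1
text = raw.decode('iso-8859-1')
print(text)  # 'ÿþ café'
```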
Had an issue similar to this; ended up using UTF-16 to decode. My code is below.
with open(path_to_file, 'rb') as f:
    contents = f.read()
contents = contents.rstrip(b"\n").decode("utf-16")
contents = contents.split("\r\n")
This reads the file contents in as bytes; from there they are decoded as UTF-16 and separated into lines.
I’ve come across this thread when suffering the same error. After doing some research I can confirm this is an error that happens when you try to decode a UTF-16 file with UTF-8.
With UTF-16 the first character (2 bytes in UTF-16) is a Byte Order Mark (BOM), which is used as a decoding hint and doesn’t appear as a character in the decoded string. This means the first byte will be either FE or FF, and the second, the other.
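A small sketch of checking for the BOM, using sample text encoded in memory (the sample string is an assumption; a real use would read the first two bytes of the file):

```python
import codecs

# Encoding with the generic 'utf-16' codec prepends a BOM automatically.
raw = '\u00e9xample'.encode('utf-16')

# The BOM is either FF FE (little-endian) or FE FF (big-endian).
has_bom = raw.startswith(codecs.BOM_UTF16_LE) or raw.startswith(codecs.BOM_UTF16_BE)
print(has_bom)               # True
print(raw.decode('utf-16'))  # 'éxample' -- the BOM is consumed, not returned
```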
Heavily edited after I found out the real answer
If you are on a Mac, check for a hidden file, .DS_Store. After removing that file my program worked.
It simply means that one chose the wrong encoding to read the file.
On Mac, use file -I file.txt to find the correct encoding. On Linux, use file -i file.txt.
If you are receiving data from a serial port, make sure you are using the right baud rate (and the other configs): decoding as utf-8 with the wrong config will generate the same error:
UnicodeDecodeError: ‘utf-8’ codec can’t decode byte 0xff in position 0: invalid start byte
To check your serial port config on Linux, use:
stty -F /dev/ttyUSBX -a
Check the path of the file to be read. My code kept giving me errors until I changed the path name to the present working directory. The error was:
  newchars, decodedbytes = self.decode(data, self.errors)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
I had a similar issue and searched all over the internet for this problem. If you have this problem, just copy your HTML code into a new HTML file and use the normal encoding, and it will work. Just create a new HTML file in the same location and use a different name.
You have to use latin1 as the encoding to read this file, as it contains some special characters. The problem here is the encoding type: when Python can’t decode the data being read, it raises an error. You can use latin1 or other encoding values; I say try and test to find the right one for your dataset.
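A minimal sketch of such a latin1 read; the temporary file and its contents are assumptions standing in for the real dataset:

```python
import tempfile

# Create a sample file containing bytes that are invalid as UTF-8.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'r\xe9sum\xe9\n')  # 0xe9 is 'é' in latin1, invalid here as UTF-8
    path = tmp.name

# latin1 accepts every byte value, so this read cannot fail.
with open(path, encoding='latin1') as f:
    contents = f.read()

print(contents)  # résumé
```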
I had a similar issue with PNG files, and I tried the solutions above without success. This one worked for me in Python 3.8:
with open(path, "rb") as f:
    contents = f.read()
This is due to the encoding used when reading the file. By default, Python decodes the data as unicode (utf-8). However, that may not work on various platforms. I propose an encoding that can help you solve this if utf-8 does not work:

import csv
with open(path, newline='', encoding='cp1252') as csvfile:
    reader = csv.reader(csvfile)

It should work if you change the encoding here. Also, you can find other encodings in the standard-encodings list if the above doesn’t work for you.
I had a similar problem.
Solved it by:
import io
with io.open(filename, 'r', encoding='utf-8') as fn:
    lines = fn.readlines()
However, I had another problem. Some html files (in my case) were not utf-8, so I received a similar error. When I excluded those html files, everything worked smoothly.
So, apart from fixing the code, also check the files you are reading from; maybe there is an incompatibility there indeed.
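One way to do that check is to scan the input directory up front and list which files fail to decode as utf-8, so they can be excluded or re-encoded. A sketch (the helper name and directory layout are assumptions):

```python
import os

def find_non_utf8(directory):
    """Return the names of files in `directory` that are not valid UTF-8."""
    bad = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        try:
            with open(path, encoding='utf-8') as f:
                f.read()
        except UnicodeDecodeError:
            bad.append(name)
    return bad
```

Running this once before the main processing loop turns a crash deep inside codecs.py into an explicit list of offending files.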
I had the same issue when processing a file generated on Linux. It turned out it was related to files containing question marks.
I had a similar problem. I tried to run an example in tensorflow/models object_detection and met the same message. Changing Python 3 to Python 2 worked for me.
If possible, open the file in a text editor and try to change the encoding to UTF-8. Otherwise do it programmatically at the OS level.