Question
I have found [How to return a numpy array as an image using FastAPI?](https://stackoverflow.com/questions/65903231/return-numpy-array-as-image-from-fastapi); however, I am still struggling to show the image, which appears just as a white square.
I read an array into `io.BytesIO` like so:

```python
def iterarray(array):
    output = io.BytesIO()
    np.savez(output, array)
    yield output.getvalue()
```
In my endpoint, my return is `StreamingResponse(iterarray(), media_type='application/octet-stream')`. When I leave the `media_type` blank, so that it is inferred, a zipfile is downloaded.
How do I get the array to be displayed as an image?
Answer
Option 1 - Return image as bytes
The below examples show how to convert an image loaded from disk, or an in-memory image (in the form of a numpy array), into bytes (using either the PIL or OpenCV library) and return them using a custom `Response`.

For the purposes of this demo, the below code is used to create the in-memory sample image (numpy array), which is based on this answer.
```python
import numpy as np

# Function to create a sample RGB image
def create_img():
    w, h = 512, 512
    arr = np.zeros((h, w, 3), dtype=np.uint8)
    arr[0:256, 0:256] = [255, 0, 0]  # red patch in upper left
    return arr
```
Using PIL
Server side:
You can load an image from disk using `Image.open`, or use `Image.fromarray` to load an in-memory image (Note: for demo purposes, when the case is loading the image from disk, the below demonstrates that operation inside the route. However, if the same image is going to be served multiple times, one could load the image only once at startup and store it on the `app` instance, as described in this answer).
Next, write the image to a buffered stream, i.e., `BytesIO`, and use the `getvalue()` method to get the entire contents of the buffer. Even though the buffered stream is garbage collected when it goes out of scope, it is generally better to call `close()` or use the `with` statement, as shown here and below.
```python
from fastapi import FastAPI, Response
from PIL import Image
import numpy as np
import io

app = FastAPI()

@app.get('/image', response_class=Response)
def get_image():
    # loading image from disk
    # im = Image.open('test.png')

    # using an in-memory image
    arr = create_img()
    im = Image.fromarray(arr)

    # save image to an in-memory bytes buffer
    with io.BytesIO() as buf:
        im.save(buf, format='PNG')
        im_bytes = buf.getvalue()

    headers = {'Content-Disposition': 'inline; filename="test.png"'}
    return Response(im_bytes, headers=headers, media_type='image/png')
```
Client side:
The below demonstrates how to send a request to the above endpoint using the Python `requests` module, and write the received bytes to a file, or convert the bytes back into a PIL `Image`, as described here.
```python
import io
import requests
from PIL import Image

url = 'http://127.0.0.1:8000/image'
r = requests.get(url=url)

# write raw bytes to file
with open('test.png', 'wb') as f:
    f.write(r.content)

# or, convert back to PIL Image
# im = Image.open(io.BytesIO(r.content))
# im.save('test.png')
```
Using OpenCV
Server side:
You can load an image from disk using the `cv2.imread()` function, or use an in-memory image, which (if it is in RGB order, as in the example below) needs to be converted, as OpenCV uses BGR as its default colour order for images. Next, use the `cv2.imencode()` function, which compresses the image data (based on the file extension you pass that defines the output format, i.e., `.png`, `.jpg`, etc.) and stores it in an in-memory buffer that is used to transfer the data over the network.
```python
import cv2
from fastapi import FastAPI, Response

app = FastAPI()

@app.get('/image', response_class=Response)
def get_image():
    # loading image from disk
    # arr = cv2.imread('test.png', cv2.IMREAD_UNCHANGED)

    # using an in-memory image
    arr = create_img()
    arr = cv2.cvtColor(arr, cv2.COLOR_RGB2BGR)
    # arr = cv2.cvtColor(arr, cv2.COLOR_RGBA2BGRA)  # if dealing with a 4-channel RGBA (transparent) image

    success, im = cv2.imencode('.png', arr)
    headers = {'Content-Disposition': 'inline; filename="test.png"'}
    return Response(im.tobytes(), headers=headers, media_type='image/png')
```
Client side:
On client side, you can write the raw bytes to a file, or use the [`numpy.frombuffer()`](https://numpy.org/doc/stable/reference/generated/numpy.frombuffer.html#numpy-frombuffer) function and the `cv2.imdecode()` function to decompress the buffer into an image format (similar to this). `cv2.imdecode()` does not require a file extension, as the correct codec will be deduced from the first bytes of the compressed image in the buffer.
```python
import requests
import numpy as np
import cv2

url = 'http://127.0.0.1:8000/image'
r = requests.get(url=url)

# write raw bytes to file
with open('test.png', 'wb') as f:
    f.write(r.content)

# or, convert back to image format
# arr = np.frombuffer(r.content, np.uint8)
# img_np = cv2.imdecode(arr, cv2.IMREAD_UNCHANGED)
# cv2.imwrite('test.png', img_np)
```
Useful Information
Since you noted that you would like the image displayed similar to a [`FileResponse`](https://fastapi.tiangolo.com/advanced/custom-response/#fileresponse), using a custom `Response` to return the bytes should be the way to do this, instead of using [`StreamingResponse`](https://fastapi.tiangolo.com/advanced/custom-response/#streamingresponse) (as shown in your question). Note that `np.savez()` saves the array in NumPy's `.npz` format, which is a zip archive rather than an image format; that is why a zipfile is downloaded when the `media_type` is left to be inferred. To indicate that the image should be viewed in the browser, the HTTP response should include the following header, as described here and as shown in the above examples (the quotes around the `filename` are required, if the `filename` contains special characters):
```python
headers = {'Content-Disposition': 'inline; filename="test.png"'}
```
Whereas, to have the image downloaded rather than viewed, use `attachment` instead:

```python
headers = {'Content-Disposition': 'attachment; filename="test.png"'}
```
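For instance, a minimal sketch of a download variant of the earlier PIL endpoint (the `/image/download` route name is just illustrative; it reuses the `create_img()` demo function from above):

```python
import io
from fastapi import FastAPI, Response
from PIL import Image

app = FastAPI()

@app.get('/image/download', response_class=Response)
def download_image():
    im = Image.fromarray(create_img())
    with io.BytesIO() as buf:
        im.save(buf, format='PNG')
        im_bytes = buf.getvalue()
    # 'attachment' prompts the browser to save the file instead of displaying it
    headers = {'Content-Disposition': 'attachment; filename="test.png"'}
    return Response(im_bytes, headers=headers, media_type='image/png')
```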
If you would like to display (or download) the image using a JavaScript interface, such as Fetch API or Axios, have a look at the answers here and here.
As for [`StreamingResponse`](https://fastapi.tiangolo.com/advanced/custom-response/#streamingresponse), if the numpy array is fully loaded into memory from the beginning, `StreamingResponse` is not necessary at all. `StreamingResponse` streams by iterating over the chunks provided by your `iter()` function (if `Content-Length` is not set in the headers; unlike `StreamingResponse`, other `Response` classes set that header for you, so that the browser will know where the data ends). As described in this answer:

> Chunked transfer encoding makes sense when you don't know the size of your output ahead of time, and you don't want to wait to collect it all to find out before you start sending it to the client. That can apply to stuff like serving the results of slow database queries, but it doesn't generally apply to serving images.
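One way to observe this difference is to inspect the response headers on the client side with `requests` (a small sketch, assuming the `Response`-based `/image` endpoint above is running):

```python
import requests

r = requests.get('http://127.0.0.1:8000/image')

# A Response-based endpoint sets Content-Length for you
print(r.headers.get('Content-Length'))     # e.g., the size of the PNG in bytes
# A StreamingResponse without Content-Length would instead be sent with
# 'Transfer-Encoding: chunked'
print(r.headers.get('Transfer-Encoding'))  # None for the Response-based endpoint
```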
Even if you would like to stream an image file that is saved on disk (which you generally shouldn't, unless it is a large file that can't fit into memory; instead, you should use [`FileResponse`](https://fastapi.tiangolo.com/advanced/custom-response/#fileresponse)), file-like objects, such as those created by `open()`, are normal iterators; thus, you can return them directly in a `StreamingResponse`, as described in the [documentation](https://fastapi.tiangolo.com/advanced/custom-response/#using-streamingresponse-with-file-like-objects) and as shown below (if you find `yield from f` being rather slow when using `StreamingResponse`, please have a look at [this answer](https://www.starlette.io/applications/#accessing-the-app-instance) for solutions):
```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.get('/image')
def get_image():
    def iterfile():
        with open('test.png', mode='rb') as f:
            yield from f

    return StreamingResponse(iterfile(), media_type='image/png')
```
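For completeness, a minimal sketch of the `FileResponse` alternative mentioned above, which handles opening and streaming the file from disk for you:

```python
from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()

@app.get('/image')
def get_image():
    # FileResponse serves the file from disk and sets the
    # Content-Length header for you
    return FileResponse('test.png', media_type='image/png')
```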
Alternatively, if the image was loaded into memory instead, and was then saved into a `BytesIO` buffered stream in order to return the bytes, `BytesIO` is a file-like object (like all the concrete classes of the `io` module), which means you could return it directly in a `StreamingResponse`:
```python
from fastapi import FastAPI, BackgroundTasks
from fastapi.responses import StreamingResponse
from PIL import Image
from io import BytesIO

app = FastAPI()

@app.get('/image')
def get_image(background_tasks: BackgroundTasks):
    arr = create_img()
    im = Image.fromarray(arr)
    buf = BytesIO()
    im.save(buf, format='PNG')
    buf.seek(0)  # rewind to the start of the buffer
    background_tasks.add_task(buf.close)  # close the buffer once the response has been sent
    return StreamingResponse(buf, media_type='image/png')
```
Thus, for your case scenario, it is best to return a `Response` with your custom `content` and [`media_type`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types), as well as setting the [`Content-Disposition`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition) header, as described above, so that the image is viewed in the browser.
Option 2 - Return image as JSON-encoded numpy array
The below should not be used for displaying the image in the browser, but it is rather added here for the sake of completeness, showing how to convert an image into a numpy array (preferably, using the `asarray()` function), then return the data in JSON format, and finally, convert the data back to an image on the client side, as described in this and this answer. For faster alternatives to the standard Python `json` library, see this answer.
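As one such alternative, below is a minimal sketch using the third-party `orjson` package (an assumption here: it has to be installed separately, e.g., `pip install orjson`), which can serialize numpy arrays natively, without calling `.tolist()` first:

```python
import orjson
from fastapi import FastAPI, Response

app = FastAPI()

@app.get('/image')
def get_image():
    arr = create_img()  # the demo numpy array from above
    # OPT_SERIALIZE_NUMPY lets orjson serialize the array directly;
    # orjson.dumps() returns bytes, which Response accepts as-is
    return Response(orjson.dumps(arr, option=orjson.OPT_SERIALIZE_NUMPY),
                    media_type='application/json')
```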
Using PIL
Server side:
```python
from fastapi import FastAPI
from PIL import Image
import numpy as np
import json

app = FastAPI()

@app.get('/image')
def get_image():
    im = Image.open('test.png')
    # im = Image.open('test.png').convert('RGBA')  # if dealing with a 4-channel RGBA (transparent) image
    arr = np.asarray(im)
    return json.dumps(arr.tolist())
```
Client side:
```python
import requests
from PIL import Image
import numpy as np
import json

url = 'http://127.0.0.1:8000/image'
r = requests.get(url=url)
arr = np.asarray(json.loads(r.json())).astype(np.uint8)
im = Image.fromarray(arr)
im.save('test_received.png')
```
Using OpenCV
Server side:
```python
import cv2
import json
from fastapi import FastAPI

app = FastAPI()

@app.get('/image')
def get_image():
    arr = cv2.imread('test.png', cv2.IMREAD_UNCHANGED)
    return json.dumps(arr.tolist())
```
Client side:
```python
import requests
import numpy as np
import cv2
import json

url = 'http://127.0.0.1:8000/image'
r = requests.get(url=url)
arr = np.asarray(json.loads(r.json())).astype(np.uint8)
cv2.imwrite('test_received.png', arr)
```