I want to display the image of a vision sensor using the Python ZMQ remote API. The following code seems to work, but it makes the CoppeliaSim simulation run for about 1 second, then freeze for 2 seconds, then run for 1 second again, and so on.
Am I missing something to make it run more smoothly?
And is there a better way to show the image without using all the transformations I applied to the image?
from time import sleep
from zmqRemoteApi import RemoteAPIClient
import numpy as np
import cv2
# create a client to connect to zmqRemoteApi server:
# (creation arguments can specify different host/port,
# defaults are host='localhost', port=23000)
client = RemoteAPIClient('localhost',23000)
# get a remote object:
sim = client.getObject('sim')
sensor1Handle=sim.getObjectHandle('VisionSensor')
sim.startSimulation()
resolution = sim.getVisionSensorResolution(sensor1Handle)
sleep(1)
while True:
    image = sim.getVisionSensorImage(sensor1Handle)
    img = np.array(image) * 255
    img = img.astype(np.uint8)
    img.resize([resolution[0], resolution[1], 3])
    img = cv2.flip(img[..., ::-1], 0)
    cv2.imshow('image', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
sim.stopSimulation()
Yes, you should use sim.getVisionSensorCharImage instead: it returns a buffer rather than a table/array, which can be slow to transfer. E.g. (from one of the examples contained in the zmqRemoteApi repository):
img, resX, resY = sim.getVisionSensorCharImage(visionSensorHandle)
img = np.frombuffer(img, dtype=np.uint8).reshape(resY, resX, 3)
# In CoppeliaSim images are left to right (x-axis), and bottom to top (y-axis)
# (consistent with the axes of vision sensors, pointing Z outwards, Y up)
# and color format is RGB triplets, whereas OpenCV uses BGR:
img = cv2.flip(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), 0)
cv2.imshow('', img)
cv2.waitKey(0)
...
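For reference, here is a minimal sketch of your script with that snippet folded into the display loop (assuming the same scene object name 'VisionSensor'; note cv2.waitKey(1) instead of cv2.waitKey(0), so the window stays responsive while streaming):

from zmqRemoteApi import RemoteAPIClient
import numpy as np
import cv2

client = RemoteAPIClient('localhost', 23000)
sim = client.getObject('sim')
sensor1Handle = sim.getObjectHandle('VisionSensor')

sim.startSimulation()
while True:
    # Grab the raw byte buffer and the resolution in a single call:
    img, resX, resY = sim.getVisionSensorCharImage(sensor1Handle)
    img = np.frombuffer(img, dtype=np.uint8).reshape(resY, resX, 3)
    # Swap the RGB channels for OpenCV's BGR order and flip vertically
    # (CoppeliaSim's y-axis points up, OpenCV's points down):
    img = cv2.flip(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), 0)
    cv2.imshow('image', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
sim.stopSimulation()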