Tracking Object on Conveyor Belt with sim.rmlStep
Posted: 16 Jan 2019, 00:26
Hi,
I am trying to have a manipulator follow a cube on a conveyor belt. I have looked at the 'blobDetectionWithPickAndPlace' demo, but I am controlling my arm in a similar way to how the Kinovas are controlled in the 'motionPlanningAndGraspingDemo' (using sim.rmlStep rather than sim.rmlMoveToPosition). The reason is that I want to be able to step the environment from Python, hence my use of the sim.rmlPos and sim.rmlStep functions.
I have tried to replicate what was done in 'blobDetectionWithPickAndPlace'. In that demo it is simple, because you can work out where the object will be after a certain number of timesteps and then call sim.rmlMoveToPosition (setting the target velocity to match that of the conveyor). But with the simplifying assumptions made for the planning with the Kinovas in 'motionPlanningAndGraspingDemo' (you talked about it some in this post), it becomes more difficult, because we are working with one-dimensional RML.
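For context, the "work out where the object will be" part is just dead-reckoning along the belt's direction of travel at constant speed. A minimal sketch of that prediction step (all names here are my own placeholders, not CoppeliaSim API):

```python
# Predict where a cube on a constant-speed conveyor will be after n
# simulation timesteps, so an intercept pose can be handed to the RML layer.
# All names are illustrative placeholders, not CoppeliaSim API.

def predict_cube_position(cube_pos, belt_dir, belt_speed, dt, n_steps):
    """Dead-reckon the cube along the belt's direction of travel.

    cube_pos   -- current (x, y, z) of the cube, in metres
    belt_dir   -- unit vector of belt travel, e.g. (1.0, 0.0, 0.0)
    belt_speed -- conveyor speed in m/s
    dt         -- simulation timestep in seconds
    n_steps    -- number of timesteps ahead to predict
    """
    travelled = belt_speed * dt * n_steps
    return tuple(p + d * travelled for p, d in zip(cube_pos, belt_dir))

# Example: belt moving along +x at 0.1 m/s, 50 ms timestep, 20 steps ahead
future = predict_cube_position((0.2, 0.0, 0.05), (1.0, 0.0, 0.0), 0.1, 0.05, 20)
```

With sim.rmlMoveToPosition this predicted pose (plus the belt velocity as the target velocity) is enough; the difficulty I describe below is doing the equivalent when stepping the RML objects myself.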
So 2 questions:
- How would you deal with it in this case?
- Have there been any updates since that forum post I shared, i.e. RML functions that handle all N DoFs of the robot at once?
Thanks!