One of the most common robots used in industrial applications is the robotic arm.
Considering its wide industrial use and the fact that developing a robotic arm covers
most of the core concepts of robotics, my colleagues and I decided to base our project on this topic.
Our work covers the whole process needed to obtain the final robot: a series of steps that starts with the 3D design and ends with the integration of camera vision.
The mechanical design is a fundamental step of our project. For this purpose, we used
SolidWorks, a 3D CAD tool that allowed us to build and modify all the functional parts. The design flow starts with the base, which houses the motor responsible for the rotation about the z-axis. Inside this part there is a dedicated casing in which the motor is fixed with four screws. The second link has a simple, functional design. In accordance with the maximum dimensions allowed by 3D printing, this link is 12 cm long and has holes at both extremities for fixing two motors. The third link is one of the most detailed and carefully designed mechanical elements. First of all, it has to connect to its motor and to the second link in order to obtain a proper rotation. Its second functional task is to connect the end-effector through an additional element. Interchangeability is its main feature: we designed this part with a central hole so that wrist rotation could easily be integrated afterwards without changing the entire design. The last part is the end-effector, which is composed of 10 sub-elements needed for correct movement. At this point the printing phase can start: the 3D model was exported to an .stl file and printed on a 3D printer.
The electrical part mainly concerns the servo motors used in the manipulator. The
main issue is designing a circuit that supplies them and, at the same time, allows the Arduino to
control them.
We use four servo motors: three identical ones for the joints and one for the
end effector. Each servo motor has three input pins: two for the supply and one for the
control signal, which is connected directly to the Arduino through a jumper wire.
At this point, the main task is implementing the reported circuit with the available components (a minimal Python control sketch is shown after the list):
● Two power-supply lines for the servo motors are set up on the breadboard, using two jumper wires: one for 5V and one for ground.
● The logic wire of each servo motor is connected to the Arduino board through a jumper wire.
● Voltage and current for the servo motors are provided by an external supply (an AC/DC
converter) that guarantees 5V and 2A. The Arduino can provide 5V on its own, which would be
enough for the servo motors, but it cannot supply the 2A the components need.
● No capacitors or voltage adapters are used in the real circuit, because
the power supply already provides the right values.
● In the real circuit the power supply is not connected to the Arduino board; it is connected
directly to the breadboard through an adapter.
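As an illustration of the control side, here is a minimal Python sketch assuming the pyFirmata library and the StandardFirmata firmware loaded on the Arduino; the serial port and pin numbers are assumptions for the example, not our actual wiring:

```python
# Minimal control sketch (assumptions: pyFirmata installed, StandardFirmata
# uploaded to the Arduino, joint servos on pins 9-11 and the gripper on
# pin 6 -- all pin numbers and the serial port are hypothetical).
import time
from pyfirmata import Arduino

board = Arduino('/dev/ttyACM0')

# Configure the digital pins in servo mode ('d:<pin>:s').
joints = [board.get_pin('d:%d:s' % p) for p in (9, 10, 11)]
gripper = board.get_pin('d:6:s')

def move(pin, angle):
    """Write a target angle (0-180 degrees) to one servo."""
    pin.write(max(0, min(180, angle)))

# Example: home the three joints, then close the gripper.
for j in joints:
    move(j, 90)
time.sleep(1.0)
move(gripper, 30)
```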
In order to make the robot autonomous, we needed to give it eyes: a sensor able to gather
information from the workspace and communicate it to the robot.
To this end, we chose a webcam that produces digital 2D projections of the environment;
from these frames we filter out only the specific colour that corresponds to the target, detecting its
position in pixels in the projection.
The results are also shown on screen so that we can see what the robot sees. The chosen operating system is Ubuntu 18.04, and a simple USB webcam was used. We had no specific requirements on
image quality or frame rate, but we needed a field of view wide enough to
guarantee that both designed solutions could cover the whole workspace.
At first, a very small camera was to be mounted on the robot's arm ("eye in hand"); its weight
and size were critical to avoid structural and dynamic problems during
movement.
In the end, we resorted to a fixed external camera with no such constraints ("eye
to hand").
Both cameras could be connected to an Ubuntu system and detected by drivers, either out of the box or
after installing them manually. An important step is aligning the camera so that it is parallel to the workspace ground plane, paying attention
to where its central point falls.
Since the detection is done by filtering a specific colour in the image, a Python script was written
to extract the required range values simply by moving the mouse cursor over the captured images.
We chose the HSV (Hue, Saturation, Value) colour format, which is the most accurate and readable for this purpose.
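A minimal sketch of such a picker, assuming OpenCV (cv2) is available and the webcam is at index 0; printing the value on every mouse movement is a simplification of our actual script:

```python
# Minimal HSV-picker sketch (assumptions: OpenCV installed, camera index 0).
# Moving the mouse over the live image prints the HSV value under the cursor.
import cv2

hsv = None

def on_mouse(event, x, y, flags, param):
    # Print the HSV triple of the pixel under the cursor.
    if hsv is not None and event == cv2.EVENT_MOUSEMOVE:
        print('HSV at (%d, %d): %s' % (x, y, hsv[y, x]))

cv2.namedWindow('picker')
cv2.setMouseCallback('picker', on_mouse)
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    cv2.imshow('picker', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```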
To increase the accuracy of the detection, we then implemented a script that
computes the centre of the masked pixel area through 2D integral moments.
This centre point is then reported as X and Y pixel values with respect to the centre of the
frame.
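As an illustration, here is a minimal sketch of this detection step; the HSV bounds below are hypothetical placeholders, since the real range was picked with the script described above:

```python
# Minimal detection sketch (assumptions: OpenCV installed, camera index 0,
# and example HSV bounds -- the actual range came from our picker script).
import cv2
import numpy as np

LOWER = np.array([100, 120, 70])   # hypothetical lower HSV bound
UPPER = np.array([130, 255, 255])  # hypothetical upper HSV bound

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)   # keep only the target colour
    m = cv2.moments(mask)                   # 2D integral moments of the mask
    if m['m00'] > 0:
        cx = m['m10'] / m['m00']            # centroid in pixel coordinates
        cy = m['m01'] / m['m00']
        h, w = mask.shape
        # Report the centre relative to the centre of the frame.
        print('target offset: X=%.1f px, Y=%.1f px' % (cx - w / 2, cy - h / 2))
cap.release()
```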
After printing all the components of the robot, they were assembled.
First of all, we gathered all the main parts, as seen in the figure below. Then, after setting all the servomotors to the zero-degree position, we mounted them in
the links. This procedure was carried out for each motor to carefully align all the axes and
reference frames with the links, taking care of the joint position limits.
To fix the motors we used the screws supplied with them, while different small fasteners were used to fix
the base, the links and the gripper.
The final result was very satisfying, because our robot succeeded in its primary
objective: it can autonomously detect the object, pick it up and place it in a final target position.
As we expected at the beginning of the development, this project allowed us to deepen and better
understand our theoretical knowledge. Moreover, the obstacles that we
encountered during the work forced us to analyse practical problems and solve them
either by going back to our theoretical sources or by developing custom solutions with some
creativity.
To conclude, we want to outline the possible improvements and future
developments that could start from our project.
First, regarding improvements, the most critical and limiting part of our work is the
actuators. The motors used are cost-effective, but their quality is very low: their maximum
torque is too small, they do not cover the full range between 0 and pi, and they are controlled only in
position, without any feedback. With higher-quality motors, the overall performance of the robot
would improve considerably, and other features could be designed and realized, such as trajectory
planning and a control law.
Another problem that can be tackled is calibration. This is a very difficult task to carry out
well, because we use an external camera to detect the object: the robot's vision depends
heavily on the camera's position and orientation with respect to the chosen fixed frame. This
means that if the camera or the robot accidentally moves from its initial position, it is no longer
possible to correctly detect the object's position and perform the pick-and-place operation.
Still on calibration, the real measurements of the robot's DH parameters had some uncertainty, due
both to the non-ideal positions of the assembled pieces and to problems in aligning each
motor's zero position with its corresponding link.
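For reference, these parameters enter each joint's standard Denavit-Hartenberg homogeneous transform, so any measurement error propagates directly into the forward kinematics:

$$
A_i =
\begin{bmatrix}
\cos\theta_i & -\sin\theta_i \cos\alpha_i & \sin\theta_i \sin\alpha_i & a_i \cos\theta_i \\
\sin\theta_i & \cos\theta_i \cos\alpha_i & -\cos\theta_i \sin\alpha_i & a_i \sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix}
$$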
In order to solve these problems and to realize a more powerful and effective robot, some possible
future developments are reported below.
First of all, with more powerful motors, trajectory planning could be considered. This is an important
feature because it would give smoother, less impulsive movements and more control over
mechanical vibrations and bounces.
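As an illustrative sketch of this idea, a simple cubic point-to-point profile with zero velocity at both ends could look as follows; all timings and angles are hypothetical:

```python
# Minimal trajectory-planning sketch: a cubic point-to-point profile with
# zero velocity at both ends (all values are illustrative).
def cubic_profile(q0, qf, T, steps=50):
    """Yield (t, angle) samples moving from q0 to qf over T seconds."""
    a2 = 3.0 * (qf - q0) / T**2   # coefficients from the boundary conditions
    a3 = -2.0 * (qf - q0) / T**3  # q(0)=q0, q(T)=qf, q'(0)=q'(T)=0
    for i in range(steps + 1):
        t = T * i / steps
        yield t, q0 + a2 * t**2 + a3 * t**3

# Example: move a joint from 0 to 90 degrees in 2 s.
for t, q in cubic_profile(0.0, 90.0, 2.0, steps=10):
    print('t=%.2f s  q=%.1f deg' % (t, q))
```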
Second, vision control could be realized through an "eye in hand" technique by installing a small
camera on the robot's arm. From a mechanical point of view, our 3D design already allows for this
option; what must still be developed is the image processing that computes the position of
the object from a view that is not parallel to the plane on which the object lies.
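One standard approach, which we did not implement, is a planar homography that maps image pixels to workspace-plane coordinates using four known reference points; the correspondences below are hypothetical calibration values:

```python
# Minimal homography sketch: map pixels to plane coordinates when the camera
# is not parallel to the workspace (the four correspondences are hypothetical
# calibration points measured once on the real setup).
import cv2
import numpy as np

# Pixel positions of four known markers on the workspace plane...
px = np.float32([[102, 310], [540, 295], [515, 60], [130, 75]])
# ...and their known coordinates on the plane, in centimetres.
cm = np.float32([[0, 0], [30, 0], [30, 20], [0, 20]])

H = cv2.getPerspectiveTransform(px, cm)   # 3x3 homography

def pixel_to_plane(u, v):
    """Project a detected pixel (u, v) onto the workspace plane."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]       # normalise homogeneous coordinates

print(pixel_to_plane(320, 180))
```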
Finally, an image-detection algorithm that is independent of the object's colour could be
developed, for example with a machine learning model that, after some
training, automatically recognizes the object regardless of its colour.