

1
Good morning, ladies and gentlemen.
Let me introduce myself.
My name is Mariusz Jacewicz.
I would like to present the concept and test results of a vision system for satellite
proximity operations. The system uses natural features on the satellite's surfaces and does
not require any artificial markers or targets for its operation. Autonomous rendezvous
and docking is necessary for planned space programs, and estimation of the relative pose
between the host platform and a resident space object is a critical capability.
2
My talk is divided into six parts. I will start by telling you why rendezvous is important.
Second, I will tell you about vision-based navigation and the algorithm. Next, I will show
the results, and finally I will tell you about the current state of our work. I will be glad to
answer any questions you may have at the end.
3
Satellite servicing includes conducting repairs, upgrading and refuelling spacecraft on
orbit. It offers a potential for extending the life of satellites and reducing launch and
operating costs. The high cost of manned missions, however, limits servicing to only the
most expensive spacecraft. Also, many communication satellites reside in high
geostationary orbits that manned spacecraft cannot reach. So, in the coming years
autonomous space rendezvous will become more and more important.
4
Next, let me describe the problem in detail. We have two satellites. The first one is the
servicing satellite, called the chaser, and it is the active one. The second is the target
satellite, which is assumed to be passive and uncooperative. The passive spacecraft is out
of control and rotates freely in space. Our goal is to join them into one rigid spacecraft.
The servicer will approach the target in several steps by reaching predefined waypoints,
typically using minimum-energy transfers. For us, only the last phases of the rendezvous
are important. All operations should be performed autonomously, with minimal
expenditure of fuel by the servicing satellite and with a high accuracy of touchdown.

Data processed by the vision system may be obtained from a variety of sensors, such as
video cameras and scanning or non-scanning rangefinders. Scanning rangefinders are
independent of the ambient illumination and less sensitive to sunlight. This comes at the
cost, weight and energy requirements of complex scanning mechanisms that may not
survive the space environment well. Non-scanning rangefinders offer the potential of low
cost, mass and energy consumption and the absence of scanning mechanisms; however,
they are still in an early development phase. The video cameras commonly used in
current space operations are available at a relatively low cost and have low mass and
energy requirements. Their main disadvantage is their dependence on ambient
illumination and sensitivity to direct sunlight. Still, we assumed that a camera would be
the best option for our project.
5
So, when we talk about cameras, it is worth noting that in the vision community this
problem is more generally known as the pose determination problem. As the satellite can
be positioned with 3 translational and 3 rotational degrees of freedom, the search space
is of high dimension and computational efficiency is a major concern. The fact that the
satellite is isolated in the scene simplifies the problem.

7
So, a simple pinhole camera model was assumed. A lowercase p denotes an image
feature point and an uppercase P denotes a model point. We have N points in the image
and M points in the model.
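As a quick illustration of this pinhole model, the following NumPy sketch projects 3D points onto the image plane; the focal length and point coordinates are made-up example values, not data from the experiments.

```python
# Minimal pinhole-projection sketch (illustrative values only).
import numpy as np

def project_pinhole(P, f):
    """Project 3D points P (N x 3, camera frame) to 2D image points (N x 2): x = f*X/Z, y = f*Y/Z."""
    P = np.asarray(P, dtype=float)
    return f * P[:, :2] / P[:, 2:3]

# Example: three model points roughly 1 m in front of a camera with f = 800 (pixels)
P = np.array([[ 0.1,  0.0, 1.0],
              [ 0.0,  0.2, 1.1],
              [-0.1, -0.1, 0.9]])
print(project_pinhole(P, f=800.0))
```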
8
On this slide you can see a flowchart of our system. The first step is image acquisition.
Next, individual features are detected in the 2D image using the Scale Invariant Feature
Transform (SIFT). They are then matched with the corresponding 3D model features, as
illustrated in the sketch below.
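The sketch below shows how such a feature-matching step could look with OpenCV's SIFT implementation; it is only an illustration under our own assumptions (a precomputed set of model descriptors paired one-to-one with 3D model points), not the exact code of our system.

```python
# Hedged sketch: detect SIFT features in the current frame and match them
# against precomputed model descriptors to obtain 2D-3D correspondences.
import cv2
import numpy as np

def match_frame_to_model(frame_gray, model_descriptors, model_points_3d, ratio=0.75):
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(frame_gray, None)
    if descriptors is None:
        return np.empty((0, 2)), np.empty((0, 3))

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(descriptors, model_descriptors, k=2)

    pts_2d, pts_3d = [], []
    for pair in knn:
        if len(pair) < 2:
            continue
        m, n = pair
        # Lowe's ratio test keeps only distinctive matches
        if m.distance < ratio * n.distance:
            pts_2d.append(keypoints[m.queryIdx].pt)
            pts_3d.append(model_points_3d[m.trainIdx])  # assumes descriptor i <-> 3D point i
    return np.array(pts_2d), np.array(pts_3d)
```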

9
Next, I would like to describe the POSIT algorithm. We need to find the rotation matrix
and the translation vector. So, to each point of the model we can assign a parameter w.
10
From these formulas we can calculate the squared distances, which are necessary for the
minimization of the objective function.
11
Next, by minimizing this function we can calculate the scale parameter s and the first and
second rotation vectors, which form the first and second rows of the rotation matrix.
Then we can also calculate the last row of the rotation. Once we know the rotation, we
can also calculate the translation. A sketch of the whole iteration is given below.
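For reference, here is a compact sketch of the classical POSIT iteration (DeMenthon and Davis) described above, written in Python/NumPy; the variable names, the fixed iteration count, and the assumption that image coordinates are expressed relative to the principal point are ours, and the real implementation may differ.

```python
# Hedged POSIT sketch: iterate scaled-orthographic pose estimation with
# perspective corrections (needs at least 4 non-coplanar model points).
import numpy as np

def posit(model_pts, image_pts, focal, n_iter=20):
    """model_pts: (N,3) object points; image_pts: (N,2) pixels relative to the principal point."""
    P0 = model_pts[0]
    A = model_pts[1:] - P0              # object vectors w.r.t. the reference point
    B = np.linalg.pinv(A)               # pseudoinverse reused in every iteration
    eps = np.zeros(len(model_pts) - 1)  # perspective correction terms (start with SOP)

    for _ in range(n_iter):
        # corrected image coordinates of the non-reference points
        x = image_pts[1:, 0] * (1.0 + eps) - image_pts[0, 0]
        y = image_pts[1:, 1] * (1.0 + eps) - image_pts[0, 1]
        I, J = B @ x, B @ y
        s1, s2 = np.linalg.norm(I), np.linalg.norm(J)
        s = 0.5 * (s1 + s2)              # the scale parameter mentioned on the slide
        i, j = I / s1, J / s2            # first and second rows of the rotation matrix
        k = np.cross(i, j)               # last row as the cross product
        Z0 = focal / s                   # depth of the reference point
        eps = A @ k / Z0                 # refine the perspective terms and repeat

    R = np.vstack([i, j, k])
    t = np.array([image_pts[0, 0] * Z0 / focal, image_pts[0, 1] * Z0 / focal, Z0])
    return R, t
```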
13
On this slide you can see the specification of the camera that was used in the experiment.
It is a standard, low-cost camera. The vision system was tested in tracking mode using
various calibrated image sequences.
14
On this slide I will describe the experiments that were carried out. The experiments were
run in Matlab.
A set of photographs of a real object was used. The experiments were conducted on
various image sequences and show the advantages of the chosen approach. Their main
goal was to obtain measurements of the position and orientation of the object and to
check how accurate the algorithm is. Other goals were to confirm the robustness to
varying occlusion conditions. The experiments were set up as follows. A plane with an
example image was mounted on a stand that can be precisely translated and rotated
relative to the camera coordinate frame. The object was placed on a calibration grid, so
the measured parameters could be compared with those obtained by the proposed
method. A typical object is shown in the image below. A low-cost camera was used. The
light source was fixed and provided uniform illumination for all positions of the tested
object. Position and one rotational degree of freedom were measured on the ground.
The ground-truth measurements were then compared with the calculated results, and
they were expected to be very close: the translation error should be below a few
millimetres and, similarly, the rotational error should be under 4 degrees. A small sketch
of this comparison is given below.
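A minimal sketch of such a comparison against the ground truth, assuming the pose is given as a rotation matrix and a translation vector in millimetres, might look like this (illustrative values only):

```python
# Hedged sketch: translation error (mm) and rotation error (degrees) between
# an estimated pose and a ground-truth pose.
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    t_err = np.linalg.norm(t_est - t_gt)
    # geodesic distance between the two rotation matrices
    cos_angle = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    r_err = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return t_err, r_err

# acceptance thresholds from the text: a few millimetres and under 4 degrees
t_err, r_err = pose_errors(np.eye(3), np.array([1.0, 0.0, 2.0]),
                           np.eye(3), np.array([0.0, 0.0, 0.0]))
print(t_err < 5.0 and r_err < 4.0)
```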
These internal camera parameters can be estimated during an offline camera calibration
stage. The camera was calibrated using the Caltech calibration method. The motion
pattern is visible in the figure. The ground-truth error varies around 1 mm in space. The
six plots present the results for the first chosen example. The camera was at a distance of
approximately 220 mm from the object. At the beginning the object is not moving; then
the object is moved manually until second 37. The object was constrained and not
moved along the x and y axes. On the x axes of the first three plots, time is given in
seconds, and on the y axes, the measured translation in millimetres. On the next plots, as
in the upper figures, the x axes show time and the y axes show orientation in degrees.
The green line shows the ground truth and the blue line shows the measurements.
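For readers without the Caltech Matlab toolbox, an equivalent offline intrinsic calibration could be sketched with OpenCV's chessboard routines, as below; the board size, square size and file names are hypothetical assumptions, not our actual setup.

```python
# Hedged sketch of offline intrinsic calibration from chessboard images.
import glob
import cv2
import numpy as np

board = (9, 6)                      # assumed inner-corner count of the chessboard
square = 25.0                       # assumed square size in millimetres
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):           # hypothetical calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# returns the reprojection RMS, the camera matrix K and the distortion coefficients
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                         gray.shape[::-1], None, None)
print("reprojection RMS:", rms, "\nintrinsics K:\n", K)
```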
15
Next, a second experiment was conducted. The object was moved, but in a different
manner than in the first experiment: it was moved mainly along the z axis, from a
position far from the camera to one closer to it.
As in the first case, six plots are presented. The first three present linear translations
along the axes of the camera coordinate system, and the other three present the angular
orientation of the object. The first plot presents the linear translation along the x axis.
There is a significant error between the two measurements; at the end of the run the
difference is about 100 mm. This is caused by the mounting of the camera: small motions
occurred because the camera was imperfectly fixed to the ground. On the second plot
both lines, green and blue, are very close, because the object was constrained on the y
axis and could not pitch or yaw. The next three plots present the measurements of the
angular orientation of the object. For the roll motion there is an error of about 2 degrees,
and very similar results were obtained for pitch. For both vision-based measurements
there is a peak at second 6. The sixth plot presents yaw; in this case the error is about
5 degrees.
The processing takes about 13 ms per frame on a modern CPU.
16
It was shown that the introduced method is able to run in real time. The camera mount
should be screwed down more rigidly, because its play introduced serious problems.
17
Additional information about our work and the conducted experiments can be found in
our paper.
