In this project we will implement a software system that uses multiple RGB-D
cameras (Kinects) working simultaneously to capture moving 3D objects in
real time. In particular, we are interested in capturing humans in motion.
During the project we will build a system that records video and depth maps
from different viewpoints simultaneously, and merges the information from all
viewpoints into a coherent 3D model. To build this system we will learn
about, build, and research the following topics. Students (all or some,
depending on the number of students) will have access to a Kinect camera.
\- Camera Calibration and synchronization
\- Point cloud registration, cleaning, and processing
\- Meshing and texturing
\- Geometry and volumetric data processing
\- Efficient 3D video representation
\- 3D user interfaces
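As a first taste of the pipeline above, the sketch below back-projects a Kinect depth map into a point cloud using the pinhole camera model. This is a minimal illustration only: the intrinsics (`fx`, `fy`, `cx`, `cy`) are assumed placeholder values roughly in the range of a Kinect v1, not calibrated parameters, and the depth map is synthetic.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into an N x 3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Hypothetical intrinsics (assumed, not calibrated) and a fake flat depth map
fx = fy = 525.0
cx, cy = 319.5, 239.5
depth = np.ones((480, 640))  # every pixel 1 m away
cloud = depth_to_pointcloud(depth, fx, fy, cx, cy)
print(cloud.shape)  # (307200, 3)
```

In the real system the intrinsics would come from the camera-calibration step, and one such cloud would be produced per camera per frame before registration.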
We aim to divide the work among small groups, each focusing on a different
area depending on knowledge, skills, and interests. The course is meant to be
interactive, with design, development, and organizational decisions taken as
a group. Students are expected to dedicate between 2 and 4 hours per week to
the project besides the lecture hours.
We will use the lecture hours to explain the concepts necessary for the
project, discuss progress, and explore possible solutions to the different
problems to solve.
The project will involve research and development, as well as the use of
open-source libraries such as OpenVDB, PCL, and OpenCV.
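To make the registration topic concrete, here is a minimal numpy sketch of the closed-form step at the heart of point cloud alignment: the Kabsch algorithm, which finds the least-squares rotation and translation between two clouds with known correspondences. Full ICP (as provided by PCL) alternates this step with a nearest-neighbor correspondence search; the synthetic data and rotation below are illustrative assumptions.

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rigid transform (R, t) mapping src onto dst, given known
    point correspondences, via SVD of the cross-covariance matrix."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Sanity check: recover a known rotation about the z axis plus a translation
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
dst = src @ R_true.T + t_true
R, t = kabsch(src, dst)
print(np.allclose(R, R_true))  # True
```

In the actual multi-camera system, PCL's registration module would be used instead of hand-rolled code, but the underlying math is the same.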