c++ - How do I re-project points in a camera-projector system (after calibration)?


I have seen many blog entries, videos and source code examples on the internet showing how to carry out camera + projector calibration using OpenCV, in order to produce the camera.yml, projector.yml and projectorExtrinsics.yml files.

I have yet to see anything discussing what to do with these files afterwards. Indeed, I have done the calibration myself, but I don't know what the next step is in my own application.

Say I write an application that uses the camera-projector system I calibrated to track objects and project something on them. I use contourFind() to grab the points of interest from the moving objects, and I want to project these points (from the projector!) onto the objects!

What I want (for example) is to track the centre of mass (COM) of an object and show a point on the camera view of the tracked object (at its COM). That point should also be projected onto the COM of the object in real time.

It seems that projectPoints() is the OpenCV function I should use after loading the yml files, but I'm not sure how to account for the intrinsic & extrinsic calibration values of both the camera and the projector. Namely, projectPoints() requires as parameters:

  • the vector of points to re-project (duh!)
  • the rotation + translation matrices. I think I can use the projectorExtrinsics here. Or I can use the composeRT() function to generate a final rotation & a final translation matrix from the projectorExtrinsics (which I have in a yml file) and the cameraExtrinsics (which I don't have. Side question: should I not save them in a file too??).
  • the intrinsics matrix. This is the tricky part. Should I use the camera or the projector intrinsics matrix here?
  • the distortion coefficients. Again, should I use the projector or the camera coefficients here?
  • other params...

So if I use either the projector's or the camera's (which one??) intrinsics + coefficients in projectPoints(), I will only be 'correcting' for one of the two instruments. Where / how do I use the other instrument's intrinsics??

What else do I need to use apart from load()-ing the yml files and projectPoints()? (Perhaps undistortion?)
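(For reference, a minimal loading sketch; the node names such as "camera_matrix" and "distortion_coefficients" below are assumptions and depend on what the calibration code actually wrote into the yml files:)

    #include <opencv2/core/core.hpp>

    int main()
    {
        // Calibration data read back from the yml files. The node names used
        // below are assumptions: use whatever names your calibration code wrote.
        cv::Mat camK, camDist, projK, projDist, R, T;

        cv::FileStorage fs("camera.yml", cv::FileStorage::READ);
        fs["camera_matrix"] >> camK;
        fs["distortion_coefficients"] >> camDist;
        fs.release();

        fs.open("projector.yml", cv::FileStorage::READ);
        fs["camera_matrix"] >> projK;
        fs["distortion_coefficients"] >> projDist;
        fs.release();

        fs.open("projectorExtrinsics.yml", cv::FileStorage::READ);
        fs["rotation"] >> R;      // camera-to-projector rotation
        fs["translation"] >> T;   // camera-to-projector translation
        fs.release();

        return 0;
    }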

Any help on the matter is appreciated. If there is a tutorial or a book out there (no, O'Reilly's "Learning OpenCV" does not talk about how to use the calibration yml files either! - only about how to do the actual calibration), please point me in its direction. I don't need an exact answer!

First, you seem to be confused about the general role of a camera/projector model: its role is to map 3D world points to 2D image points. This sounds obvious, but it means that given the extrinsics R,t (for orientation and position), the distortion function D(.) and the intrinsics K, you can infer, for this particular camera, the 2D projection m of a 3D point M as follows: m = K.D(R.M + t). The projectPoints function does exactly that (i.e. 3D to 2D projection), for each input 3D point, hence you need to give it the input parameters associated with the camera in which you want your 3D points projected (the projector's K & D if you want the projector's 2D coordinates, the camera's K & D if you want the camera's 2D coordinates).
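As a concrete illustration of that last point, here is a minimal sketch (the function and variable names are illustrative, and the calibration matrices are assumed to have been loaded from the yml files as above) that projects 3D points known in the camera's coordinate frame into projector pixels:

    #include <opencv2/core/core.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <vector>

    // Project 3D points, expressed in the camera's coordinate frame, into the
    // projector image. R and T are the camera-to-projector extrinsics, projK and
    // projDist the projector's intrinsics and distortion coefficients.
    std::vector<cv::Point2f> toProjectorPixels(const std::vector<cv::Point3f>& pts3d,
                                               const cv::Mat& R, const cv::Mat& T,
                                               const cv::Mat& projK, const cv::Mat& projDist)
    {
        cv::Mat rvec;
        cv::Rodrigues(R, rvec);   // projectPoints expects a rotation vector

        std::vector<cv::Point2f> pts2d;
        cv::projectPoints(pts3d, rvec, T, projK, projDist, pts2d);
        return pts2d;
    }

To get the camera's 2D coordinates of the same points instead, you would pass a zero rotation vector and a zero translation (the camera frame being the reference) together with the camera's K and distortion coefficients.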

Second, when you jointly calibrate a camera and a projector, you do not estimate one set of extrinsics R,t for the camera and one for the projector, but only one R and one t, which represent the rotation and translation between the camera's and the projector's coordinate systems. For instance, this means that the camera is assumed to have rotation = identity and translation = zero, and the projector has rotation = R and translation = t (or the other way around, depending on how you did the calibration).
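If your 3D points happen to live in some other frame (a calibration-board or world frame, say), this is where the composeRT you mentioned becomes useful: it chains the world-to-camera pose with the camera-to-projector extrinsics. A sketch, where rvecWC/tvecWC are hypothetical names for a world-to-camera pose you would obtain elsewhere (e.g. with solvePnP):

    #include <opencv2/core/core.hpp>
    #include <opencv2/calib3d/calib3d.hpp>

    // Chain a world-to-camera pose (rvecWC, tvecWC) with the camera-to-projector
    // extrinsics (rvecCP, tvecCP) to obtain the world-to-projector transform,
    // which can then be fed to projectPoints together with the projector's K & D.
    void worldToProjectorPose(const cv::Mat& rvecWC, const cv::Mat& tvecWC,
                              const cv::Mat& rvecCP, const cv::Mat& tvecCP,
                              cv::Mat& rvecWP, cv::Mat& tvecWP)
    {
        cv::composeRT(rvecWC, tvecWC, rvecCP, tvecCP, rvecWP, tvecWP);
    }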

Now, concerning the application you mentioned, the real problem is: how do you estimate the 3D coordinates of a given point?

Using two cameras and one projector, this would be easy: you would track the objects of interest in the two camera images, triangulate their 3D positions from the two 2D projections using the function triangulatePoints, and finally project the 3D points into the projector's 2D coordinates using projectPoints, in order to know where to display things with your projector.
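A sketch of that two-camera pipeline for a single tracked point, assuming you have built the two 3x4 projection matrices P1 and P2 (K*[R|t]) from your calibration (names are illustrative, not from any particular sample):

    #include <opencv2/core/core.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <vector>

    // Triangulate one tracked point from its two 2D observations, then
    // re-project the 3D result into the projector image. rvecCP/tvecCP are the
    // camera-1-to-projector extrinsics, projK/projDist the projector intrinsics.
    cv::Point2f trackAndReproject(const cv::Point2f& ptCam1, const cv::Point2f& ptCam2,
                                  const cv::Mat& P1, const cv::Mat& P2,
                                  const cv::Mat& rvecCP, const cv::Mat& tvecCP,
                                  const cv::Mat& projK, const cv::Mat& projDist)
    {
        std::vector<cv::Point2f> pts1(1, ptCam1), pts2(1, ptCam2);

        cv::Mat pts4D;                                     // 4x1, homogeneous
        cv::triangulatePoints(P1, P2, pts1, pts2, pts4D);

        cv::Mat pts4Dt = pts4D.t();                        // one row per point
        std::vector<cv::Point3f> pts3D;
        cv::convertPointsFromHomogeneous(pts4Dt, pts3D);

        std::vector<cv::Point2f> projPts;
        cv::projectPoints(pts3D, rvecCP, tvecCP, projK, projDist, projPts);
        return projPts[0];
    }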

With only one camera and one projector, this is still possible but more difficult, because you cannot triangulate the tracked points from a single observation. The basic idea is to approach the problem as a sparse stereo disparity estimation problem. A possible method is as follows (a sketch of step 3 is given after the list):

  1. Project a non-ambiguous image (e.g. black and white noise) using the projector, in order to texture the scene observed by the camera.

  2. As before, track the objects of interest in the camera image.

  3. For each object of interest, correlate a small window around its location in the camera image with the projector image, in order to find where it projects in the projector's 2D coordinates.
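A minimal sketch of step 3 using matchTemplate as the correlation step (the window size, the score threshold and the choice to search the whole pattern rather than just the corresponding epipolar line are all simplifying assumptions):

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>

    // Correlate a small window around the tracked point in the camera image
    // against the projected noise pattern to locate it in projector coordinates.
    // Both images are assumed to be 8-bit grayscale.
    bool findInProjector(const cv::Mat& camImg, const cv::Mat& projPattern,
                         const cv::Point& ptCam, cv::Point& ptProj,
                         int win = 15, double minScore = 0.7)
    {
        cv::Rect roi(ptCam.x - win / 2, ptCam.y - win / 2, win, win);
        if ((roi & cv::Rect(0, 0, camImg.cols, camImg.rows)) != roi)
            return false;                               // window falls outside the image

        cv::Mat score;
        cv::matchTemplate(projPattern, camImg(roi), score, cv::TM_CCOEFF_NORMED);

        double maxVal;
        cv::Point maxLoc;
        cv::minMaxLoc(score, 0, &maxVal, 0, &maxLoc);
        if (maxVal < minScore)
            return false;                               // correlation too weak

        ptProj = maxLoc + cv::Point(win / 2, win / 2);  // centre of the best match
        return true;
    }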

Another approach, which unlike the one above would actually use the calibration parameters, would be to do a dense 3D reconstruction using stereoRectify and StereoBM::operator() (or gpu::StereoBM_GPU::operator() for the GPU implementation), map the tracked 2D positions to 3D using the estimated scene depth, and finally project into the projector using projectPoints.
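A rough sketch of that dense pipeline with the OpenCV 2.x API named above (all names are illustrative; it glosses over remapping the images to the rectified geometry with initUndistortRectifyMap/remap, and over resolution and photometric differences between the camera image and the projected pattern):

    #include <opencv2/core/core.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <vector>

    // camK/camDist and projK/projDist are the two sets of intrinsics, R/T the
    // camera-to-projector extrinsics. camImgRect and patImgRect are the
    // rectified 8-bit grayscale camera image and projected pattern, and
    // 'tracked' is a tracked pixel in the rectified camera image.
    cv::Point2f denseReproject(const cv::Mat& camImgRect, const cv::Mat& patImgRect,
                               const cv::Mat& camK, const cv::Mat& camDist,
                               const cv::Mat& projK, const cv::Mat& projDist,
                               const cv::Mat& R, const cv::Mat& T,
                               const cv::Point& tracked)
    {
        cv::Mat R1, R2, P1, P2, Q;
        cv::stereoRectify(camK, camDist, projK, projDist, camImgRect.size(),
                          R, T, R1, R2, P1, P2, Q);

        // Block matching on the rectified pair gives a disparity map, which Q
        // converts into a dense map of 3D points (in the rectified camera frame).
        cv::StereoBM bm(cv::StereoBM::BASIC_PRESET, 64, 21);
        cv::Mat disp, xyz;
        bm(camImgRect, patImgRect, disp, CV_32F);
        cv::reprojectImageTo3D(disp, xyz, Q);

        // 3D position of the tracked pixel; rotate it back from the rectified
        // frame into the original camera frame before reprojecting.
        cv::Vec3f p = xyz.at<cv::Vec3f>(tracked);
        cv::Mat Xrect = (cv::Mat_<double>(3, 1) << p[0], p[1], p[2]);
        cv::Mat Xcam = R1.t() * Xrect;

        std::vector<cv::Point3f> pts3d(1, cv::Point3f((float)Xcam.at<double>(0),
                                                      (float)Xcam.at<double>(1),
                                                      (float)Xcam.at<double>(2)));
        std::vector<cv::Point2f> pts2d;
        cv::Mat rvec;
        cv::Rodrigues(R, rvec);
        cv::projectPoints(pts3d, rvec, T, projK, projDist, pts2d);
        return pts2d[0];
    }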

Anyhow, this would be easier, and more accurate, with two cameras.

Hope this helps.

