OpenCV example, and why does Google do so poorly?

Take searching for cvGetSpatialMoment on Google.

All the top results are nearly useless, just code that doesn't help much if you don't know what cvGetSpatialMoment does.

The "CV Reference Manual" that comes with an OpenCV install probably should come up first (though the local html files of course aren't Google-searchable), or any real explanation or tutorial of the function. Scrolling down further there are some odd but useful sites; I guess the official Willow Garage docs haven't been linked to enough.

The official OpenCV book on Google Books is highly searchable; some pages are restricted, but many are not.

Through all that frustration I did manage to learn enough of the basics to load an image, process a portion of it to look for a certain color, and then find the center of the region that has that color.
IplImage* image = cvLoadImage( base_filename, CV_LOAD_IMAGE_COLOR );

split it into two halves for separate processing
IplImage* image_left = cvCreateImage( cvSize( image->width/2, image->height), IPL_DEPTH_8U, 3 );
cvSetImageROI( image, cvRect( 0, 0, image->width/2, image->height ) );
cvCopy( image, image_left );
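The ROI mechanics were the least obvious part to me at first: cvSetImageROI just restricts later operations to a rectangle, so the cvCopy above copies only the left half. Conceptually it's a row-by-row strided copy, which can be sketched without OpenCV (a toy version assuming a tightly packed 3-channel buffer; real IplImages have row padding via widthStep):

```cpp
#include <cstring>

// Copy the left half of a packed multi-channel image into a new
// buffer, the way cvSetImageROI + cvCopy does above.
void copy_left_half(const unsigned char *src, unsigned char *dst,
                    int width, int height, int channels)
{
    int half = width / 2;
    for (int y = 0; y < height; ++y)
        std::memcpy(dst + y * half * channels,   // dst rows are half-width
                    src + y * width * channels,  // src rows are full-width
                    (size_t)(half * channels));
}
```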

convert it to hsv color space
IplImage* image_left_hsv = cvCreateImage( cvSize(image_left->width, image_left->height), IPL_DEPTH_8U, 3 );
cvCvtColor( image_left, image_left_hsv, CV_BGR2HSV );

get only the hue component using the COI '[color] Channel Of Interest' function
IplImage* image_left_hue = cvCreateImage( cvSize(image_left->width, image_left->height), IPL_DEPTH_8U, 1 );
cvSetImageCOI( image_left_hsv, 1);
cvCopy(image_left_hsv, image_left_hue);
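The COI copy above is conceptually just a strided read: with the COI set to 1, cvCopy pulls out the first channel of the interleaved HSV data. A minimal OpenCV-free sketch (note OpenCV's COI is 1-based, so COI 1 corresponds to channel index 0 here):

```cpp
// Extract one channel (0-based index) from an interleaved buffer,
// which is what cvCopy does when a COI is set on the source.
void extract_channel(const unsigned char *src, unsigned char *dst,
                     int npixels, int channels, int channel)
{
    for (int i = 0; i < npixels; ++i)
        dst[i] = src[i * channels + channel];
}
```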

find only the parts of the image within a certain hue range (huemin and huemax chosen beforehand)
IplImage* image_msk = cvCreateImage( cvSize(image_left_hue->width, image_left_hue->height), IPL_DEPTH_8U, 1 );
cvInRangeS(image_left_hue, cvScalarAll(huemin), cvScalarAll(huemax), image_msk);

erode it down to get rid of noise
cvErode(image_msk,image_msk,NULL, 3);
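Both of those calls are simple per-pixel operations underneath: cvInRangeS writes 255 where a pixel falls inside the range and 0 elsewhere, and erosion keeps a pixel only if its neighbors are also set. Toy versions (the erosion here is 1D for brevity; the real cvErode default is a 3x3 kernel over the 2D mask):

```cpp
// Threshold: 255 where lo <= v <= hi, else 0 (like cvInRangeS).
void in_range(const unsigned char *src, unsigned char *dst,
              int n, unsigned char lo, unsigned char hi)
{
    for (int i = 0; i < n; ++i)
        dst[i] = (src[i] >= lo && src[i] <= hi) ? 255 : 0;
}

// One 3x1 horizontal erosion pass: a pixel survives only if it and
// both neighbors are set, so single-pixel noise specks disappear.
void erode_1d(const unsigned char *src, unsigned char *dst, int n)
{
    for (int i = 0; i < n; ++i) {
        int l = (i > 0) ? src[i - 1] : 0;
        int r = (i < n - 1) ? src[i + 1] : 0;
        dst[i] = (l && src[i] && r) ? 255 : 0;
    }
}
```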

and then find the centers of mass of the found regions
CvMoments moments;
cvMoments(image_msk, &moments, 1);
double m00, m10, m01;

m00 = cvGetSpatialMoment(&moments, 0, 0);
m10 = cvGetSpatialMoment(&moments, 1, 0);
m01 = cvGetSpatialMoment(&moments, 0, 1);

// TBD check that m00 != 0
float center_x = m10/m00;
float center_y = m01/m00;
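For anyone else googling in vain: cvGetSpatialMoment(&moments, p, q) returns m_pq, the sum of x^p * y^q over the image values, so for a binary mask m00 is the area and (m10/m00, m01/m00) is the centroid. A standalone sketch of the whole step, including the m00 != 0 guard:

```cpp
// Compute m00, m10, m01 over a binary mask and derive the centroid,
// mirroring what cvMoments + cvGetSpatialMoment do above.
void mask_centroid(const unsigned char *msk, int width, int height,
                   double *cx, double *cy)
{
    double m00 = 0, m10 = 0, m01 = 0;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (msk[y * width + x]) {
                m00 += 1;  // area
                m10 += x;  // sum of x coordinates
                m01 += y;  // sum of y coordinates
            }
    // guard against an empty mask
    *cx = (m00 != 0) ? m10 / m00 : 0;
    *cy = (m00 != 0) ? m01 / m00 : 0;
}
```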

Copy the single channel mask back into one channel of a three channel rgb image

IplImage* image_rgb = cvCreateImage( cvSize(image_msk->width, image_msk->height), IPL_DEPTH_8U, 3 );
cvSetImageCOI( image_rgb, 2);
cvCopy( image_msk, image_rgb );
cvSetImageCOI( image_rgb, 0);

and draw circles on a temp image where the centers of mass are
cvCircle(image_rgb,cvPoint(int(center_x),int(center_y)), 10, CV_RGB(200,50,50),3);

All the work of setting channels of interest and regions of interest was new to me. I could have operated on the images in place rather than creating many new ones, which takes more memory (and I need to remember to free all of them with cvReleaseImage), but for debugging it's nice to keep the intermediate steps around.


mewantee example

I've made enough fixes to mewantee to open it up and allow most of it to be viewed without logging in, and creating a user no longer requires activation.

There isn't much on there right now, but I have a good example: there's a project called crossephex I was working on a few months ago, and I'll probably start on it again soon. It's supposed to be a VJ/visuals-generating tool for Processing, similar to gephex. I need a bunch of basic graphics to use as primitives to mix with each other to create interesting effects, so on mewantee I have a request asking for help from other people generating those graphics. Each one shouldn't take more than a few minutes to make; of course I could do it myself, but I think it's a good example of what the site might be good for.



I created a website called mewantee using google appengine. It's closed to the public right now, but I need some users to try it out and tell me if they run into any problems using it normally, or any feedback at all. If you login with a gmail account (google handles the login, I won't know anything except your email address, and even that will be hidden from other users), I'll be sent a notification email and I can then activate your account.

What is it about? Mainly I'd like it to incentivize the creation of Creative Commons and open source content, and it uses a sort of economic model to do that. Even if it turns out to be too strange, or the kind of users needed to make it work don't show up, it was a good exercise for learning Python and App Engine.

Something else to figure out: I have a domain pointing to the App Engine site, but is there any way to make it appear under that domain to everyone else, the way this blog is really hosted on one address but seen under another?


Gephex 0.4.3 updated for Ubuntu 8.10

Since there hasn't been a better version of Gephex since 0.4.3 (though I haven't tried compiling from the repository recently; my last attempt was not successful), I've downloaded the source and hacked at it until it built on Ubuntu 8.10, updated as of today:

I haven't tested it all the way, especially the video input modules, but it probably works.

Most of the changes have to do with newer versions of gcc, which now treat extra qualification like classname::method inside class definitions as errors, and some files needed to include stdlib.h or string.h that didn't before. Also a structure definition in libavcodec had to be messed with: the static declaration removed.
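That gcc error is the "extra qualification" diagnostic: repeating the class name on a method declared inside its own class body used to be tolerated but is now rejected. A minimal illustration (Foo is a made-up name, not from the gephex source):

```cpp
// Newer gcc rejects the extra qualification inside the class body:
//   class Foo { int Foo::bar(); };  // error: extra qualification 'Foo::' on member 'bar'
// The fix is simply dropping the qualifier from the declaration:
class Foo {
public:
    int bar();
};

// Qualification is still required (and fine) at the out-of-class definition.
int Foo::bar() { return 42; }
```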

nasm, qt3 (in the form of libqt3-headers), and libxv-dev had to be installed (plus other things non-standard for 8.10 that I already had installed for other purposes). For qt3, flags for the include, bin, and lib directories needed to be passed to configure.

I had to run configure in the ffmpeg library and disable mmx with the --disable-mmx flag; putting that flag in the top-level makefile didn't work. My configuration-specific makefiles are in the tarball, so you would definitely have to rerun configure to override them.

Next I'll be creating a new custom gephex module for my ARToolkit multimarker UI project.



I've tested this build more extensively and have discovered that the Ubuntu visual effects that are on by default cause the gephex output window to flicker. To disable them, go to System | Preferences | Appearance | Visual Effects and select None. It's possible that if I built gephex with OpenGL support the two would co-exist better.

Also, the screencap frei0r module I've depended on extensively in the past updates extremely slowly on the laptop I'm currently using; it may be an ATI thing (I originally developed it on an Nvidia system).


Marker Tracking as Visualization Interface

My idea is that I would be able to do an ARToolkit-based visualization performance by using a clear table with markers I can slide, rotate, add, and remove, with all those movements corresponding to events on screen. Unlike other AR videos, the source video wouldn't necessarily be incorporated into the output; the markers provide an almost infinitely expressive set of UI knobs and sliders.

So far I have this:

AR User Interface from binarymillenium on Vimeo.

The lighting is difficult: the markers need to read as pure white and black pixels, but the plexiglass tends to produce reflections. Also, if the light source itself is visible, a marker sitting right on top of it can't be detected. I need a completely black backdrop under the plexiglass so no reflections obscure the markers, and also more numerous and softer diffuse lights.

One way to solve the reflection problem is to have the camera looking down at a table, though it's a little harder to get the camera up high enough, and I didn't want my hands or body to obscure the markers- the clear table idea is more elegant and self-contained.

The frame rate isn't very high; I need to work on making it all more real-time and responsive. It may have to be that one computer captures video and finds marker positions, sending them to another computer that is completely free to visualize them. More interpolation and position prediction could also smooth things out and cover gaps when a marker isn't recognized in a frame, but that could add lag.
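The prediction part can start as simple linear extrapolation from the last two observed positions of a marker (a sketch under that assumption; real marker data is noisier and would probably want smoothing or a proper filter):

```cpp
struct Pos { float x, y; };

// Predict where a marker will be next frame by extrapolating its
// last two observed positions; useful for papering over frames
// where the marker isn't recognized.
Pos predict_next(Pos prev, Pos curr)
{
    Pos p;
    p.x = curr.x + (curr.x - prev.x);  // constant-velocity assumption
    p.y = curr.y + (curr.y - prev.y);
    return p;
}
```

With more history a higher-order fit or a Kalman filter would trade some lag for smoother motion.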