tag:blogger.com,1999:blog-280933882009-05-06T13:08:09.575-07:00binarymilleniumMy videos and photographs, and other creative efforts.binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.comBlogger89125tag:blogger.com,1999:blog-28093388.post-55213049602261539702009-05-02T12:39:00.000-07:002009-05-02T12:59:41.990-07:002009-05-02T12:59:41.990-07:00WeaponizersRusty of Seattle's <a href="http://www.hazardfactory.org/">HazardFactory</a>, known for <a href="http://www.flickr.com/groups/830505@N22//">power tool drag racing</a> and other events usually involving <a href="http://picasaweb.google.com/binarymillenium/20081231_hazardfactory_new_years#5286350731453237426">conflagrations of some sort</a>, is now turning cars into remote control tools of destruction:<br /><br /><object width="560" height="340"><param name="movie" value="http://www.youtube.com/v/TgBNkKGcfYo&hl=en&fs=1"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/TgBNkKGcfYo&hl=en&fs=1" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="560" height="340"></embed></object><br /><br />Some choice quotes on <a href="http://www.brepettis.com/blog/2009/5/1/real-car-wars-weaponizers.html">Bre's blog</a>.<br /><br /><a href="http://dsc.discovery.com/tv-schedules/series.html?paid=1.14710.25975.37228.1">Discovery channel listing</a><br /><br />The RC is probably great fun and well suited for television but I'd like to see (or better yet work on) an autonomous version of this, or incorporate more autonomous elements into it, though the cars would probably be an order or three of magnitude more expensive. 
Which is why you should tune in and make sure the ratings are high so they can afford to destroy $5K lidars or gimbal mounts in future iterations...<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-5521304960226153970?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-5462257462600775612009-04-06T06:05:00.000-07:002009-04-06T06:26:09.665-07:002009-04-06T06:26:09.665-07:00OpenCV example, and why does Google do so poorly?Take searching for cvGetSpatialMoment:<br /><a href="http://www.google.com/search?hl=en&q=cvGetSpatialMoment&btnG=Google+Search&aq=f&oq=">http://www.google.com/search?hl=en&q=cvGetSpatialMoment&btnG=Google+Search&aq=f&oq=</a><br /><br />All the top results are nearly useless, just code that doesn't help much if you don't know what cvGetSpatialMoment does.<br /><br />The "CV Reference Manual" that comes with an install of OpenCV probably should come up first (the local html files of course aren't google searchable), or any real text explanation or tutorial of the function. So scrolling down further there are some odd but useful sites like <a href="http://www.ieeta.pt/~jmadeira/OpenCV/OpenCVdocs/ref/opencvref_cv.htm">http://www.ieeta.pt/~jmadeira/OpenCV/OpenCVdocs/ref/opencvref_cv.htm</a>. 
I guess the official <a href="http://opencv.willowgarage.com/wiki/CxCore">Willow Garage docs here</a> haven't been linked to enough.<br /><br />The <a href="http://books.google.com/books?id=seAgiOfu2EIC&printsec=frontcover&dq=opencv#PPP1,M1">official OpenCV book on Google</a> is highly searchable; some pages are restricted but many are not.<br /><br />Through all that frustration I did manage to learn the basics: load an image, process a portion of the image to look for a certain color, and then find the center of the region that has that color.<br /><br /><blockquote><code>IplImage* image = cvLoadImage( base_filename, CV_LOAD_IMAGE_COLOR );</code></blockquote><br /><br />split it into two halves for separate processing<br /><blockquote><code>IplImage* image_left = cvCreateImage( cvSize( image->width/2, image->height), IPL_DEPTH_8U, 3 );<br />cvSetImageROI( image, cvRect( 0, 0, image->width/2, image->height ) );<br />cvCopy( image, image_left );</code></blockquote><br /><br />convert it to hsv color space<br /><blockquote><code>IplImage* image_left_hsv = cvCreateImage( cvSize(image_left->width, image_left->height), IPL_DEPTH_8U, 3 );<br />cvCvtColor(image_left,image_left_hsv,CV_BGR2HSV);</code></blockquote><br /><br />get only the hue component using the COI '[color] Channel Of Interest' function<br /><blockquote><code>IplImage* image_left_hue = cvCreateImage( cvSize(image_left->width, image_left->height), IPL_DEPTH_8U, 1 );<br />cvSetImageCOI( image_left_hsv, 1);<br />cvCopy(image_left_hsv, image_left_hue);</code></blockquote><br /><br />find only the parts of the image within a certain hue range<br /><blockquote><code>cvInRangeS(image_left_hue, cvScalarAll(huemin), cvScalarAll(huemax), image_msk);</code></blockquote><br /><br />erode it down to get rid of noise<br /><blockquote><code>cvErode(image_msk,image_msk,NULL, 3);</code></blockquote><br /><br />and then find the centers of mass of the found regions<br /><blockquote><code>CvMoments moments;<br
/> cvMoments(image_msk, &moments, 1);<br /> double m00, m10, m01;<br /><br /> m00 = cvGetSpatialMoment(&moments, 0,0);<br /> m10 = cvGetSpatialMoment(&moments, 1,0);<br /> m01 = cvGetSpatialMoment(&moments, 0,1);<br /> <br /> // TBD check that m00 != 0<br /> float center_x = m10/m00;<br /> float center_y = m01/m00;</code></blockquote><br /><br />Copy the single-channel mask back into a three-channel rgb image<br /><blockquote><code><br /> IplImage* image_rgb = cvCreateImage( cvSize(image_msk->width, image_msk->height), IPL_DEPTH_8U, 3 );<br /> cvSetImageCOI( image_rgb, 2);<br /> cvCopy(image_msk,image_rgb);<br /> cvSetImageCOI( image_rgb, 0);</code></blockquote><br /><br />and draw circles on a temp image where the centers of mass are<br /><blockquote><code>cvCircle(image_rgb,cvPoint((int)center_x,(int)center_y), 10, CV_RGB(200,50,50),3);</code></blockquote><br /><br />All the work of setting channels of interest and regions of interest was new to me. I could have operated on images in place rather than creating many new ones, taking up more memory (and I would need to remember to free the memory created by all of them), but for debugging it's nice to keep around the intermediate steps.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-546225746260077561?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-84487658282475935602009-03-29T09:42:00.001-07:002009-03-29T09:51:10.866-07:002009-03-29T09:51:10.866-07:00mewantee exampleI've made enough fixes to <a href="http://mewantee.com">mewantee</a> to open it up and allow most of it to be viewed without logging in, and creating a user no longer requires activation.<br /><br />There isn't much on there right now, but I have a good example: There's a project called <a href="http://code.google.com/p/crossephex/">crossephex</a> I was working on a few months ago, and
I'll probably start on it again soon. It's supposed to be a VJ/visuals-generating tool for <a href="http://processing.org">Processing</a>, similar to Gephex. I need a bunch of basic graphics to use as primitives to mix with each other to create interesting effects, so on mewantee I have <a href="http://mewantee.com/request/4">this request</a>, which asks for help from other people generating those graphics. Each one shouldn't take more than a few minutes to make; of course I could do it myself, but I think it's a good example of what the site might be good for.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-8448765828247593560?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-21580223681422516552009-03-25T19:02:00.000-07:002009-03-25T19:15:56.251-07:002009-03-25T19:15:56.251-07:00mewantee!I created a website called <a href="http://mewantee.com">mewantee</a> using Google App Engine. It's closed to the public right now, but I need some users to try it out and tell me if they run into any problems using it normally, or give any feedback at all. If you log in with a gmail account (Google handles the login; I won't know anything except your email address, and even that will be hidden from other users), I'll be sent a notification email and I can then activate your account.<br /><br />What is it about? Mainly I'd like it to incentivize the creation of creative commons and open source content, and it uses a sort of economic model to do it.
Even if it is too strange or the kind of users needed to make it work don't show up, it was a good exercise to learn Python and App Engine.<br /><br />Something else to figure out- I have mewantee.com pointing to mewanteee.appspot.com; is there any way to make it stay mewantee.com for everyone else, the way this blog is really on blogspot.com but is seen as binarymillenium.com?<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-2158022368142251655?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com2tag:blogger.com,1999:blog-28093388.post-5685129357795953352009-02-21T13:35:00.000-08:002009-02-23T19:56:36.403-08:002009-02-23T19:56:36.403-08:00Gephex 0.4.3 updated for Ubuntu 8.10Since there hasn't been a better version of Gephex since 0.4.3 (though I haven't tried compiling the repository recently; the last time was not successful), I've downloaded the source and hacked on it until it built on Ubuntu 8.10 updated to today:<br /><br /><a href="http://binarymillenium.googlecode.com/files/gephex-0.4.3updated.tgz">http://binarymillenium.googlecode.com/files/gephex-0.4.3updated.tgz</a><br /><br />I haven't tested it all the way, especially the video input modules, but it probably works.<br /><br />Most of the changes have to do with updates to gcc, which now treats extra classname::method qualification in cpp files as an error, and some files needed to include stdlib.h or string.h that didn't before. Also a structure definition in libavcodec had to be messed with- the static declaration removed.<br /><br />nasm, qt3 in the form of libqt3-headers, and libxv-dev had to be installed (along with other non-standard things for 8.10 that I already had installed for other purposes).
For qt3, flags for the include, bin, and lib dirs needed to be passed to configure.<br /><br />I had to run configure in the ffmpeg library and disable mmx with the --disable-mmx flag; putting that flag in the top-level makefile didn't work. My configuration-specific makefiles are in the tarball, so you would definitely have to rerun configure to override them.<br /><br />Next I'll be creating a new custom gephex module for my ARToolkit multimarker UI project.<br /><br />----<br /><br />Update<br /><br />I've tested this build more extensively, and have discovered that the Ubuntu visual effects that are on by default cause the gephex output window to flicker. To disable them go to System | Preferences | Appearance | Visual Effects and select none. It's possible that if I build gephex with OpenGL support these options will co-exist better.<br /><br />Also, my <a href="http://code.google.com/p/binarymillenium/source/browse/#svn/trunk/screencap%3Fstate%3Dclosed">screencap frei0r module</a>, which I've depended on extensively in the past, updates extremely slowly on the laptop I'm currently using; it may be an ATI thing (I originally developed it on an Nvidia system).<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-568512935779595335?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-40516793378779928862009-02-18T07:17:00.000-08:002009-02-18T07:32:34.657-08:002009-02-18T07:32:34.657-08:00Marker Tracking as Visualization InterfaceMy idea is that I would be able to do an ARToolkit-based visualization performance by using a clear table with markers I can slide, rotate, add, and remove, with all those movements corresponding to events on screen. Unlike other AR videos, the source video wouldn't necessarily be incorporated into the output; the markers provide an almost infinitely expressive set of UI knobs and sliders.
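To make the knobs-and-sliders idea concrete, here is a sketch of how a tracked marker's pose could be mapped to control values- its rotation acting as a knob and its position as two sliders. The function and the table dimensions are hypothetical, my own illustration rather than anything ARToolkit provides:

```python
import math

def marker_to_controls(x, y, angle, table_w=640.0, table_h=480.0):
    """Map a tracked marker pose to normalized control values.

    x, y: marker center in table coordinates (hypothetical units).
    angle: marker rotation in radians.
    Returns (slider_x, slider_y, knob), each clamped/normalized to [0, 1].
    """
    slider_x = min(max(x / table_w, 0.0), 1.0)
    slider_y = min(max(y / table_h, 0.0), 1.0)
    # wrap the rotation into [0, 2*pi) and normalize, like a 360-degree knob
    knob = (angle % (2.0 * math.pi)) / (2.0 * math.pi)
    return slider_x, slider_y, knob

# a marker mid-table, a quarter of the way up, turned half a revolution
print(marker_to_controls(320.0, 120.0, math.pi))  # (0.5, 0.25, 0.5)
```

Each visual effect parameter could then listen to one of these normalized values, and adding or removing a marker simply adds or removes a set of controls.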
<br /><br />So far I have this:<br /><br /><object width="400" height="225"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=3264793&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=3264793&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="225"></embed></object><br /><a href="http://vimeo.com/3264793">AR User Interface</a> from <a href="http://vimeo.com/user168788">binarymillenium</a> on <a href="http://vimeo.com">Vimeo</a>.<br /><br />The lighting is difficult: the markers need to be white and black pixels, but the plexiglass tends to produce reflections. Also, if the light source itself is visible, a marker can't sit right on top of it. I need a completely black backdrop under the plexiglass so there are no reflections that will obscure the markers, and also more numerous and softer diffuse lights.<br /><br />One way to solve the reflection problem is to have the camera looking down at a table, though it's a little harder to get the camera up high enough, and I didn't want my hands or body to obscure the markers- the clear table idea is more elegant and self-contained.<br /><br />The frame rate isn't very high; I need to work on making it all more real-time and responsive. It may have to be that one computer captures video and finds marker positions and sends them to another computer that is completely free to visualize them.
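A split capture/visuals setup mostly comes down to serializing each marker's id and pose every frame and shipping it over the network. A minimal sketch of what a per-marker packet might look like- the field layout (id plus 2D position and rotation) is my own assumption, not a format ARToolkit defines:

```python
import struct

# one marker per packet: int32 id plus x, y, angle as float32,
# in network byte order; suitable for e.g. a UDP datagram per frame
MARKER_FMT = "!ifff"

def pack_marker(marker_id, x, y, angle):
    """What the capture machine would send."""
    return struct.pack(MARKER_FMT, marker_id, x, y, angle)

def unpack_marker(data):
    """What the visuals machine would receive."""
    return struct.unpack(MARKER_FMT, data)

# round-trip check of the 16-byte packet
packet = pack_marker(7, 120.5, 64.0, 1.5)
print(unpack_marker(packet))  # (7, 120.5, 64.0, 1.5)
```

A fixed binary layout like this keeps the per-frame cost tiny, so the network hop shouldn't add much to the lag the post worries about.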
Also more interpolation and position prediction could smooth things out, and cover up gaps if a marker isn't recognized in a frame, but that could produce more lag.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-4051679337877992886?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com1tag:blogger.com,1999:blog-28093388.post-13710874575454697172009-01-29T06:37:00.000-08:002009-02-03T06:39:44.365-08:002009-02-03T06:39:44.365-08:00Bundler - the Photosynth core algorithms GPLed<a href="http://www.flickr.com/photos/binarymillenium/3243645203/" title="bundler 212009 65922 AM.bmp by binarymillenium, on Flickr"><img src="http://farm4.static.flickr.com/3094/3243645203_77818b9561.jpg" width="500" height="333" alt="bundler 212009 65922 AM.bmp" /></a><br />[update- the output of bundler is less misaligned looking than this, I was incorrectly displaying the results here and in the video]<br /><br />Bundler (<a href="http://phototour.cs.washington.edu/bundler">http://phototour.cs.washington.edu/bundler</a>) takes photographs and can create 3D point clouds and camera positions derived from them similar to what Photosynth does- this is called structure from motion. 
It's hard to believe this has been out as long as the publically available Photosynth but I haven't heard about it- it seems to be in stealth mode.<br /><br /><object width="400" height="225"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=3035817&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=3035817&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="225"></embed></object><br /><a href="http://vimeo.com/3035817">Bundler - GPLed Photosynth - Car</a> from <a href="http://vimeo.com/user168788">binarymillenium</a> on <a href="http://vimeo.com">Vimeo</a>.<br /><br />From that video it is apparent that highly textured flat surfaces do best. The car is reflective and dull grey and so generates few correspondences, but the hubcaps, license plate, parking strip lines, and grass and trees work well. I wonder if this could be combined with a space carving technique to get a better car out of it.<br /><br />It's a lot rougher around the edges lacking the Microsoft Live Labs contribution, a few sets I've tried have crashed with messages like "RunBundler.sh: line 60: 2404 Segmentation fault (core dumped) $MATCHKEYS list_keys.txt matches.init.txt" or sometimes individual images throw it with "This application has requested the Runtime to terminate it..." but it appears to plow through (until it reaches that former error). <br /><br />Images without good EXIF data trip it up, the other day I was trying to search flickr and find only images that have EXIF data and allow full view, but am not successful so far. 
Some search strings supposedly limit results by focal length, which seems like it would limit results to images with EXIF data, but that wasn't the case.<br /><br />Bundler outputs ply files, which can be read in <a href="http://meshlab.sourceforge.net/">Meshlab</a> with the modification that these two lines be added to the ply header:<br /><br />element face 0<br />property list uchar int vertex_index<br /><br />Without this Meshlab will give an error about there being no faces, and give up.<br /><br />Also I have some Processing software that is a little less user-friendly but doesn't require the editing:<br /><a href="http://code.google.com/p/binarymillenium/source/browse/trunk/processing/bundler/"><br />http://code.google.com/p/binarymillenium/source/browse/trunk/processing/bundler/</a><br /><br />Bundler can't handle filenames with spaces right now; I think I can fix this myself without too much work, since it's mostly a matter of making sure names are passed everywhere with quotes around them.<br /><br />Multi-megapixel files load sift up significantly until it crashes after taking a couple of gigabytes of memory (and probably isn't able to get more from Windows):<br /><code><br />...<br />[Found in EXIF tags]<br /> [CCD width = 5.720mm]<br /> [Resolution = 3072 x 2304]<br /> [Focal length (pixels) = 3114.965]<br />[Found 18 good images]<br />[- Extracting keypoints -]<br /><br />This application has requested the Runtime to terminate it in an unusual way.<br />Please contact the application's support team for more information.<br /></code><br /><br />Resizing them to 1600x1200 worked without crashing and took only a few hundred megabytes of memory per image, so somewhat more megapixels may work as well.<br /><br />The most intriguing feature is the incremental option; I haven't tested it yet, but it promises to be able to take new images and incorporate them into existing bundles.
Unfortunately each new image has a matching time proportional to the number of previous images- maybe it would be possible to incrementally remove images also, or remove found points that are in regions that already have high point densities?<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-1371087457545469717?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com8tag:blogger.com,1999:blog-28093388.post-58171015148024571672009-01-24T23:20:00.000-08:002009-01-24T23:37:59.298-08:002009-01-24T23:37:59.298-08:00Nested blocks in Django duplication problemsThe official instructions and cursory google searches didn't turn up a good explanation, but I've figured it out for myself. I was confused about nesting blocks, sometimes getting no output or getting duplicate output.<br /><br />In this example the base has a single level of nesting with two sub-blocks.<br /><br />base.html:<br /><code><br />{% block outer %}<br /><br />{% block inner1 %}<br />this is inner1<br />{% endblock inner1 %}<br /><br /><br />{% block inner2 %}<br />this is inner2<br />{% endblock inner2 %}<br /><br />{% endblock outer %}<br /></code><br /><br />This file duplicates the original block structure but adds new content:<br />some.html:<br /><code><br />{% extends "base.html" %}<br /><br />{% block outer %}<br />{{ block.super }}<br /><br />new stuff<br /><br />{% endblock outer %}<br /></code><br /><br />The output would be<br /><blockquote>this is inner1<br />this is inner2<br />new stuff</blockquote><br /><br />Moving the 'new stuff' line before the block.super would swap the order of the output statements.
There is no way to interject the new content in between inner1 and inner2 without creating a new block that sits between them in the parent base.html file.<br /><br /><br /><br />Don't try to do this (which is what I thought to do initially):<br /><br /><code><br />{% extends "base.html" %}<br /><br />{% block outer %}<br />{{ block.super }}<br /><br />new stuff<br /><br />{% block inner2 %}<br />new inner2<br />{% endblock inner2 %}<br /><br />{% endblock outer %}<br /></code><br /><br />It will result in duplication like this:<br /><blockquote>this is inner1<br />new inner2<br />new stuff<br />new inner2</blockquote><br /><br /><br />Instead, an extending file that wants to alter any parent block should do it in a non-nested way- don't redefine an inherited block while inside another inherited block:<br /><code><br />{% extends "base.html" %}<br /><br />{% block outer %}<br />{{ block.super }}<br /><br />new stuff<br /><br />{% endblock outer %}<br /><br />{% block inner2 %}<br />new inner2<br />{% endblock inner2 %}<br /></code><br /><br />And now the output will be without duplication.
<br /><br /><blockquote>this is inner1<br />new inner2<br />new stuff<br /></blockquote><br /><br /><br />The block.super needs to be in there, or the redefinition of inner2 won't be applied to anything.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-5817101514802457167?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-80672308177167356872009-01-04T09:13:00.000-08:002009-01-04T09:34:46.154-08:002009-01-04T09:34:46.154-08:00Laser ScanningThe idea is to project laser lines onto a flat surface, image them, and then put objects in front of the surface and compute the displacement made by the object.<br /><br />Here is the flat base with a line on it:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_apbW4BHhLTg/SWDu9q5noiI/AAAAAAAABb8/dRvY_HboZq0/s1600-h/base1001.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 320px;" src="http://4.bp.blogspot.com/_apbW4BHhLTg/SWDu9q5noiI/AAAAAAAABb8/dRvY_HboZq0/s400/base1001.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5287488705788355106" /></a><br /><br />Here is the line at the same position with objects intersecting:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_apbW4BHhLTg/SWDvXIIBIXI/AAAAAAAABcE/Ae0_83lxe6w/s1600-h/misc1001.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 320px;" src="http://4.bp.blogspot.com/_apbW4BHhLTg/SWDvXIIBIXI/AAAAAAAABcE/Ae0_83lxe6w/s400/misc1001.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5287489143130104178" /></a><br /><br />Finding depth involves figuring out the 2d projection of the normal line that is perpendicular to the wall at any point along the laser line.
I'm working on this but it's also possible to guess an average line for low precision demonstration. The software looks for all points where it believes the laser is shining, and then computes the intersection of the normal line with the original object free laser line, and gets depth.<br /><br />I had about 8 different images from laser lines, here are the results from two:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_apbW4BHhLTg/SWDydC0oPgI/AAAAAAAABcM/wyu48viKbDM/s1600-h/laserdepth1.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 323px;" src="http://3.bp.blogspot.com/_apbW4BHhLTg/SWDydC0oPgI/AAAAAAAABcM/wyu48viKbDM/s400/laserdepth1.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5287492543320702466" /></a><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_apbW4BHhLTg/SWDyhIpukEI/AAAAAAAABcU/sDxCi28ULMI/s1600-h/laserdepth2.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 307px;" src="http://1.bp.blogspot.com/_apbW4BHhLTg/SWDyhIpukEI/AAAAAAAABcU/sDxCi28ULMI/s400/laserdepth2.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5287492613605068866" /></a><br /><br />The yellow lines are the projected normals from the base line to the found laser line on the backpack and broom. There are some spurious results, and also on the dark woven backpack material the laser was not always reflected strongly enough to register. 
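The intersection step can be sketched in a few lines: project the (guessed or computed) normal direction from a detected laser point back onto the object-free base laser line, and the length of that segment is the displacement that maps to depth. This is an illustrative sketch with made-up numbers, not the actual laserscan code:

```python
def intersect(p, d, a, b):
    """Intersect the line through point p with direction d against the
    line through a and b. Assumes the two lines are not parallel."""
    (px, py), (dx, dy) = p, d
    (ax, ay), (bx, by) = a, b
    ex, ey = bx - ax, by - ay
    # solve p + t*d == a + s*e for t
    denom = dx * ey - dy * ex
    t = ((ax - px) * ey - (ay - py) * ex) / denom
    return (px + t * dx, py + t * dy)

def displacement(found, normal, base_a, base_b):
    """Distance from a detected laser point to the object-free base line,
    measured along the projected normal direction."""
    ix, iy = intersect(found, normal, base_a, base_b)
    fx, fy = found
    return ((fx - ix) ** 2 + (fy - iy) ** 2) ** 0.5

# say the laser on the bare wall ran horizontally at y=100 and the
# projected normal points straight down the image
d = displacement((240.0, 80.0), (0.0, 1.0), (0.0, 100.0), (640.0, 100.0))
print(d)  # 20.0
```

Converting that pixel displacement into an absolute depth still requires the camera and laser geometry, which is the part described above as a work in progress; the displacement alone already gives relative depth.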
<br /><br />The source code is here:<br /><br /><a href="http://code.google.com/p/binarymillenium/source/browse/trunk/processing/laserscan">http://code.google.com/p/binarymillenium/source/browse/trunk/processing/laserscan</a><div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-8067230817716735687?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-10684295626527965682008-12-09T20:30:00.000-08:002008-12-09T20:44:15.843-08:002008-12-09T20:44:15.843-08:00Multicamera Balloon Imagery<object width="400" height="225"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=2470571&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=2470571&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="225"></embed></object><br /><a href="http://vimeo.com/2470571">AHAB Tether Test</a> from <a href="http://vimeo.com/user168788">binarymillenium</a> on <a href="http://vimeo.com">Vimeo</a>.<br /><br />I was originally reminded of the unused source images because another team member recently posted some pictures from their camera, and then I made this, and then <a href="http://brepettis.com/blog/2008/12/09/diy-space/">Bre posted about it</a>, so it's getting a lot of positive feedback.<br /><br />This video would have been made earlier, but I had assumed that the cameras were screwy and firing at different times and the image sequences would not line up at all- turns out they did, they just had wildly different start points.<br /><br />Also I finished this school project that was kind of a simple occupancy-grid-inspired thing; this video shows parts of
it:<br /><br /><object width="400" height="225"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=2383465&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=2383465&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="225"></embed></object><br /><a href="http://vimeo.com/2383465">2.5D</a> from <a href="http://vimeo.com/user168788">binarymillenium</a> on <a href="http://vimeo.com">Vimeo</a>.<br /><br />I might revisit some of this and get the registration code working (and working a lot faster), instead of cheating and using the known camera position and attitude.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-1068429562652796568?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-11990918494182683432008-11-21T08:34:00.000-08:002008-11-28T16:56:53.672-08:002008-11-28T16:56:53.672-08:00Depth buffer to 3d coordinates?I'm having trouble transforming screen coordinates back to 3d, which <a href="http://processing.org/discourse/yabb_beta/YaBB.cgi?board=OpenGL;action=display;num=1227285113;start=0#0">this post</a> describes- can anyone help me?<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_apbW4BHhLTg/SSbj5x_kgfI/AAAAAAAABVo/tfMC1rPf8JI/s1600-h/depth10000.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 400px;" 
src="http://4.bp.blogspot.com/_apbW4BHhLTg/SSbj5x_kgfI/AAAAAAAABVo/tfMC1rPf8JI/s400/depth10000.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5271150995695763954" /></a><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_apbW4BHhLTg/SSbkC6DhS-I/AAAAAAAABVw/06OGdPIrmVU/s1600-h/frame00001.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 200px;" src="http://1.bp.blogspot.com/_apbW4BHhLTg/SSbkC6DhS-I/AAAAAAAABVw/06OGdPIrmVU/s400/frame00001.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5271151152478637026" /></a><br /><br />---<br />Update - I've got it figured out now, I should have been using gluUnProject:<br /><br /><code><br /><br />FloatBuffer fb;<br /><br />fb = BufferUtil.newFloatBuffer(width*height);<br /><br />gl.glReadPixels(0, 0, width, height, GL.GL_DEPTH_COMPONENT, GL.GL_FLOAT, fb); <br />fb.rewind();<br /><br />int viewport[] = new int[4]; <br />double[] proj=new double[16];<br />double[] model=new double[16];<br />gl.glGetIntegerv(GL.GL_VIEWPORT, viewport, 0);<br />gl.glGetDoublev(GL.GL_PROJECTION_MATRIX,proj,0);<br />gl.glGetDoublev(GL.GL_MODELVIEW_MATRIX,model,0);<br /><br />...<br />for(int i...<br />for (int j...<br />...<br />glu.gluUnProject(i,height-j,rawd, model,0,proj,0,viewport,0,pos,0); <br />float d = (float)-pos[2];<br /></code><br /><br />After all that depth d will be linear and in proper world coordinates.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-1199091849418268343?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-71742266680154573952008-11-20T07:02:00.000-08:002008-11-20T07:11:35.602-08:002008-11-20T07:11:35.602-08:00Artoolkit + rangefinderSince my relatively inexpensive <a href="/2008/10/artoolkit-rangefinding-continued.html">purely visual 
depth map</a> approach wasn't that successful, I've tried it out using a rangefinder instead of a visible laser. This means I can point the video camera straight at the marker (which is attached to the rangefinder), and it can point at anything provided I don't tilt it so the camera can't see the marker/fiducial.<br /><br />This is the result:<br /><br /><object width="400" height="225"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=2294333&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=2294333&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="225"></embed></object><br /><a href="http://vimeo.com/2294333">Artoolkit with a Rangefinder</a> from <a href="http://vimeo.com/user168788">binarymillenium</a> on <a href="http://vimeo.com">Vimeo</a>.<br /><br />The following plots show the tracked attitude of the rangefinder as measured by ARToolkit:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_apbW4BHhLTg/SSV8p4_g8kI/AAAAAAAABUY/7uUlOalikJo/s1600-h/camera_angle_side.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 293px;" src="http://3.bp.blogspot.com/_apbW4BHhLTg/SSV8p4_g8kI/AAAAAAAABUY/7uUlOalikJo/s400/camera_angle_side.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5270755998022300226" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_apbW4BHhLTg/SSV8iB5L4wI/AAAAAAAABUQ/bYvT63vGNlA/s1600-h/camera_angle_top.png"><img style="display:block; margin:0px auto 10px; 
text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 293px;" src="http://3.bp.blogspot.com/_apbW4BHhLTg/SSV8iB5L4wI/AAAAAAAABUQ/bYvT63vGNlA/s400/camera_angle_top.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5270755862972719874" /></a><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_apbW4BHhLTg/SSV8R4Ch3gI/AAAAAAAABUA/fvoejwdpL2E/s1600-h/camera_angle_45.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 293px;" src="http://4.bp.blogspot.com/_apbW4BHhLTg/SSV8R4Ch3gI/AAAAAAAABUA/fvoejwdpL2E/s400/camera_angle_45.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5270755585449647618" /></a><br /><br />My left to right bottom to top scanning approach is very apparent.<br /><br /><br />And here is the tracked attitude (as a 3-component vector) plus the range vs. time:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_apbW4BHhLTg/SSV8_pYKGbI/AAAAAAAABUg/pjBm8I-gMQ0/s1600-h/dist.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 295px;" src="http://4.bp.blogspot.com/_apbW4BHhLTg/SSV8_pYKGbI/AAAAAAAABUg/pjBm8I-gMQ0/s400/dist.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5270756371787815346" /></a><br /><br />You can see how cyclical it is, as I scan the floor in front of me the range doesn't change much until I reach one end and tilt the tripod up a little, and then later on I start to capture the two wheels of the car.<div class="blogger-post-footer"><img width='1' height='1' 
src='http://res1.blogblog.com/tracker/28093388-7174226668015457395?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-34009915492396946262008-11-09T08:16:00.000-08:002008-11-09T08:49:24.451-08:002008-11-09T08:49:24.451-08:00University of Washington BioRobotics LabI took a tour of the UW BioRobotics Lab, where an old professor of mine works on telerobotics with haptic interfaces.<br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/XxZH072pTNFK7KzeBbvvGQ"><img src="http://lh5.ggpht.com/_apbW4BHhLTg/SRcLo6O2SiI/AAAAAAAABR0/D1SlgGqL8gQ/s400/DSC_8019.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right"></td></tr></table><br /><br />This is a <a href="http://brl.ee.washington.edu/Research_Active/Surgery/Project_07/Project_07.html"> surgery robot called 'The Raven'</a>. It's mostly camouflaged due to the large amounts of detail and contrast in the robot itself and in the background. 
The DV camera is going to be replaced by a pair of HD cameras that will provide stereo vision.<br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/AM0mNj0RMzh6YYmtJ7GCnw"><img src="http://lh5.ggpht.com/_apbW4BHhLTg/SRcLrb-sf5I/AAAAAAAABR8/tOwvo9RdGzI/s400/DSC_8020.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right"></td></tr></table><br /><br />Multiple motors pull on cables seen in a later photo that control the manipulator end of the arm.<br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/J_HpC1zhsjfz7FV8ueeMGw"><img src="http://lh6.ggpht.com/_apbW4BHhLTg/SRcLucmRmtI/AAAAAAAABSE/mJzMq_mgI5U/s400/DSC_8021.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right"></td></tr></table><br /><br /><a href="http://picasaweb.google.com/lh/photo/OwHZDdFb9UejpZlGwVam4A"><img src="http://lh5.ggpht.com/_apbW4BHhLTg/SRcLvxfWcvI/AAAAAAAABSM/Fj5cv_kq2Z0/s400/DSC_8022.jpg" /></a><br /><br /><a href="http://picasaweb.google.com/lh/photo/chP-klXN-TqNWLkf_JSRJw"><img src="http://lh3.ggpht.com/_apbW4BHhLTg/SRcLxXiW39I/AAAAAAAABSU/EseschwUoiU/s400/DSC_8023.jpg" /></a><br /><br /><br /><a href="http://picasaweb.google.com/lh/photo/1G00m4EK58H1Eweba_uxRw"><img src="http://lh6.ggpht.com/_apbW4BHhLTg/SRcL4g7kXQI/AAAAAAAABSs/tc6v-v49Ohc/s400/DSC_8026.jpg" /></a><br /><br /><a href="http://www.ee.washington.edu/people/faculty/hannaford/">Blake Hannaford</a> shows the arms that will replace the manually positioned arms seen in the previous photos.<br /><br /><a href="http://picasaweb.google.com/lh/photo/0cksb52fO6X1xLE8ikET6w"><img src="http://lh3.ggpht.com/_apbW4BHhLTg/SRcL7QV2lPI/AAAAAAAABS0/WUrct79qidY/s400/DSC_8027.jpg" /></a><div class="blogger-post-footer"><img width='1' height='1' 
src='http://res1.blogblog.com/tracker/28093388-3400991549239694626?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-49926447499593725802008-10-14T20:35:00.001-07:002008-10-14T20:46:00.990-07:002008-10-14T20:46:00.990-07:00<a href="http://picasaweb.google.com/binarymillenium/20081012_garden_bee#"><img src="http://picasaweb.google.com/lh/photo/9pMkX5MwRNqfA437l1Rfrw?authkey=fgmD_u59U30"><img src="http://lh4.ggpht.com/binarymillenium/SPVlRN5K7xI/AAAAAAAABIU/SGNPsY1S2Oc/s800/hang_in_there_bee2.jpg" title="hang in there"/></img></a><br /><br />Hey lil feller. I'm just going to pick you up with this stick and you can hang on, take you back to the garden. I didn't mean to wake you up from your hibernation or whatever.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-4992644749959372580?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-32558404436347162792008-10-07T21:10:00.000-07:002008-10-08T07:20:06.454-07:002008-10-08T07:20:06.454-07:00Artoolkit rangefinding continuedI've discovered the <a href="http://artoolkit.sourceforge.net/apidoc/param_8h.html#e31d34be66699b8343aedaef1d627777">arParamObserv2Ideal() </a> function to correct for image distortion, and I think it has improved things. But my main problem is figuring out how to properly project the line from the origin through the laser dot in marker/fiducial space. 
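For anyone attempting the same projection, here is a rough sketch of the underlying rigid-transform math (plain Python with illustrative names- this is not the ARToolkit API). Given the camera-from-marker rotation R and translation t (e.g. from the matrix arGetTransMat returns) and pinhole intrinsics, the camera ray through the undistorted laser-dot pixel can be expressed in marker coordinates by inverting the transform:

```python
def camera_ray_in_marker_space(u, v, fx, fy, cx, cy, R, t):
    """Express the camera ray through undistorted pixel (u, v) in marker coords.

    fx, fy, cx, cy are assumed pinhole intrinsics; R is a 3x3 row-major
    nested list and t a length-3 translation, together mapping marker
    coordinates into camera coordinates.
    """
    # Direction of the ray through pixel (u, v), in camera coordinates.
    d_cam = [(u - cx) / fx, (v - cy) / fy, 1.0]
    # Camera origin expressed in marker coordinates: o_m = -R^T t.
    o_marker = [-sum(R[r][c] * t[r] for r in range(3)) for c in range(3)]
    # Directions transform without translation: d_m = R^T d_cam.
    d_marker = [sum(R[r][c] * d_cam[r] for r in range(3)) for c in range(3)]
    # Normalize the direction before returning it.
    n = sum(x * x for x in d_marker) ** 0.5
    return o_marker, [x / n for x in d_marker]
```

Intersecting that ray with the ray of the visible laser line is then an ordinary closest-point-between-two-lines problem.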
I have a message out on the mailing list but it is not that active.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_apbW4BHhLTg/SOwyypGnpMI/AAAAAAAABEU/knfZ0OEQ78Y/s1600-h/output2.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="http://4.bp.blogspot.com/_apbW4BHhLTg/SOwyypGnpMI/AAAAAAAABEU/knfZ0OEQ78Y/s400/output2.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5254630710842991810" /></a><br /><br />The results of my crude <a href="http://code.google.com/p/binarymillenium/source/browse/trunk/#trunk/processing/fillgaps">fillgaps</a> processing app are shown below, using the somewhat sparse points from above.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_apbW4BHhLTg/SOw0FI0A3jI/AAAAAAAABEc/YgAb_9pHl-o/s1600-h/output3.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="http://3.bp.blogspot.com/_apbW4BHhLTg/SOw0FI0A3jI/AAAAAAAABEc/YgAb_9pHl-o/s400/output3.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5254632128104160818" /></a><br /><br />The above results look pretty good- the points along the edge of the wall and floor are the furthest from the camera so appear black, and the floor and wall toward the top and bottom of the image are closer and get brighter.<br /><br />My main problem is getting live feedback of where I've gotten points. With a live view that showed all found depth points it would be easier to achieve uniform coverage, rather than going off of memory. 
My problem there is that to use ARToolkit I have to detect the markers, then shrink the image down and draw dots over it- it doesn't sound too hard, but my first attempt got all messed up.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-3255840443634716279?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-91989292067618877662008-10-05T21:43:00.000-07:002008-10-07T21:39:07.074-07:002008-10-07T21:39:07.074-07:00Depth Maps with ARToolkit and a Laser PointerFlying home from a recent trip to the east coast, I tried to figure out what the most inexpensive method for approximating scanning lidar would be. This is my answer:<br /><br /><object width="400" height="225"> <param name="allowfullscreen" value="true" /> <param name="allowscriptaccess" value="always" /> <param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=1897078&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" /> <embed src="http://vimeo.com/moogaloop.swf?clip_id=1897078&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="225"></embed></object><br /><a href="http://vimeo.com/1897078?pg=embed&amp;sec=1897078">ARToolKit assisted laser rangefinding</a> from <a href="http://vimeo.com/user168788?pg=embed&amp;sec=1897078">binarymillenium</a> on <a href="http://vimeo.com?pg=embed&amp;sec=1897078">Vimeo</a>.<br /><br /><br />It's not that inexpensive, since I'm using a high resolution network camera similar to an Elphel- but it's possible I could replace it with a good consumer still camera with a linux supported usb interface for getting live images.<br /><br /><br /><a 
href="http://picasaweb.google.com/lh/photo/U_aq3uoxPU2VAjekjECanA?authkey=fgmD_u59U30"><img src="http://lh3.ggpht.com/binarymillenium/SOmfhfvQ6UI/AAAAAAAABDc/oDCuBVsxTvw/s400/artoolkitlaser.png" /></a><br /><br />In the above screenshot, the line projected from the found fiducial is shown, along with a target marking where the red laser dot was found- the two ought to cross each other, but I need to learn more about transforming coordinates in and out of camera space in ARToolkit to improve upon it.<br /><br /><a href="http://picasaweb.google.com/lh/photo/P2mUCM-NBNssM2Gg3gMFfg?authkey=fgmD_u59U30"><img src="http://lh6.ggpht.com/binarymillenium/SOmUz51lTVI/AAAAAAAABC8/YryXJFQtpZs/s400/now0001.jpg" /></a><br /><br /><a href="http://picasaweb.google.com/lh/photo/olFcKziqe5H_4k2T6Xmf3g?authkey=fgmD_u59U30"><img src="http://lh3.ggpht.com/binarymillenium/SOmU0EIhutI/AAAAAAAABDM/X_lzrEfGcuI/s400/output_good.png" /></a><br /><br /><a href="http://picasaweb.google.com/lh/photo/GJGiClHNWXHgoPS6UNp0xA?authkey=fgmD_u59U30"><img src="http://lh3.ggpht.com/binarymillenium/SOmU0NvBkTI/AAAAAAAABDE/F_YufxTtw24/s400/output_good_composite.jpg" /></a><br /><br />This picture shows that the left side of the screen is 'further' away than the wall on the right, but that is not quite right- it is definitely further away from the fiducial, so I may be making an error.<br /><br /><a href="http://code.google.com/p/binarymillenium/source/browse/trunk/artoolkit/laser">source code</a><div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-9198929206761887766?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-61179335139317735362008-10-05T09:59:00.000-07:002008-11-09T08:49:36.302-08:002008-11-09T08:49:36.302-08:00Intel Research Seattle - Open House<table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/ik_UmEYwc8ZjhJNeCjf0DQ"><img 
src="http://lh3.ggpht.com/binarymillenium/SORD6tpYj8I/AAAAAAAAA98/Y5Aeavzdmwk/s400/DSC_7809.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br /><span style="font-weight:bold;">Robotic Arm</span><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/iK9dqQiI03BI6fTkIjeUuQ"><img src="http://lh6.ggpht.com/binarymillenium/SORDZd30OVI/AAAAAAAAA7Y/HyikAOu1peI/s400/DSC_7785.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/NC1uWHPvpmeV5Vod-HOzQA"><img src="http://lh4.ggpht.com/binarymillenium/SORDchUJ94I/AAAAAAAAA7w/YCKWSMrlWMA/s400/DSC_7792.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br />This robot had a camera and EM field sensors in its hand, and could detect the presence of an object within grasping distance. Some objects it had been trained to recognize, and others it did not but would attempt to grab anyway. Voice synthesis provided additional feedback- most humorously when it accidentally (?) dropped something it said 'oops'. 
Also motor feedback sensed when the arm was pushed on, and the arm would give way- making it much safer than an arm that would blindly power through any obstacle.<br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/oFBC7KYIWVahZUx7YbPMvg"><img src="http://lh4.ggpht.com/binarymillenium/SORDahjNPsI/AAAAAAAAA7g/3IpE5B9XVFU/s400/DSC_7786.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/ubKWf-72K5h-pgPb-42yiw"><img src="http://lh5.ggpht.com/binarymillenium/SORDbgkea_I/AAAAAAAAA7o/9Kar0dqKf8M/s400/DSC_7789.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br /><b>It's insane, this application level taint</b><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/p8zcdYvAb__PYaps3osdEw"><img src="http://lh5.ggpht.com/binarymillenium/SORDqtuKUbI/AAAAAAAAA80/iaRuJw6UG2A/s400/DSC_7800.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br />I don't really know what this is about...<br /><br /><br /><b>Directional phased array wireless networking</b><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/jswwJWLRwppU1xiW_oEOJw"><img src="http://lh3.ggpht.com/binarymillenium/SORDrz_3ZgI/AAAAAAAAA88/Y4k5Ns-ojuI/s400/DSC_7801.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; 
font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br />The phased array part is pretty cool, but the application wasn't that compelling: using two directional antennas, a select zone can be provided with wireless access while zones not overlapped by both are excluded. Maybe it's more power efficient that way?<br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/Ab-JOGWC3SHaYzEe3wp6Vg"><img src="http://lh4.ggpht.com/binarymillenium/SORDuF7PdmI/AAAAAAAAA9E/Q0UwrmEq4eY/s400/DSC_7802.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/YNXADR0MizM8uss-vYu3KQ"><img src="http://lh6.ggpht.com/binarymillenium/SORDvu6_CII/AAAAAAAAA9M/lbkyRSfjcHI/s400/DSC_7803.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br />This antenna also had a motorized base, so that comparisons between physically rotating the antenna and rotating the field pattern could be made.<br /><br /><b>Haptic squeeze</b><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/qSBu_NNnsFvPN7nn-5uQZg"><img src="http://lh6.ggpht.com/binarymillenium/SORD1Le5oMI/AAAAAAAAA9k/DaWh8AfTDSI/s400/DSC_7806.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research 
Seattle</a></td></tr></table><br /><br />This squeeze thing has a motor in it to resist pressure, but was broken at the time I saw it. The presenter said it wasn't really intended to simulate handling of real objects in virtual space like other haptic interfaces might, but be used more abstractly as an interface to anything.<br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/3Nt2spvaG_g-i_8KVqd9nQ"><img src="http://lh3.ggpht.com/binarymillenium/SORD4HnG2jI/AAAAAAAAA90/1cc-O2VgFYk/s400/DSC_7808.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br /><b>RFID Accelerometer</b><br /><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/ygbK_shWTDW3SnNRRYwvUg"><img src="http://lh5.ggpht.com/binarymillenium/SORD-fiHFsI/AAAAAAAAA-M/4a7kJFAdu8o/s400/DSC_7815.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/AQZvs7DJLFojUXhZZUYKxg"><img src="http://lh3.ggpht.com/binarymillenium/SOREBx9fjyI/AAAAAAAAA-g/ZoxSGN41_6g/s400/DSC_7817.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br />This was one of two RFID accelerometers- powered entirely from the RFID antenna, the device sends back accelerometer data to rotate a planet on a computer screen. The range was very limited, and the update rate about 10 Hz. 
The second device could be charged within range of the field, then be taken out of range and moved around, then brought back to the antenna to download a time history (only 2 Hz now) of measurements taken. The canonical application is putting the device in a shipped package and then reading what it experienced upon receipt.<br /><br /><b>Wireless Resonant Energy</b><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/Qr4_TffooM9hwA3jtpqjiQ"><img src="http://lh3.ggpht.com/binarymillenium/SOREGsDo2AI/AAAAAAAAA-w/_Rp1scC96kM/s400/DSC_7821.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/xV-Ljh6P5GgUoMXJXjhubg"><img src="http://lh4.ggpht.com/binarymillenium/SOREEebu4ZI/AAAAAAAAA-o/uhh87c82Noo/s400/DSC_7819.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br />This has had plenty of coverage elsewhere but is very cool to see in person. 
Currently, moving the receiver end more than an inch forward or back, or rotating it, causes the light bulb to dim and go out.<br /><br /><b>Scratch Interface</b><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/jnvrjTRcnShLBJD8JA6fOA"><img src="http://lh4.ggpht.com/binarymillenium/SOREJpxJbPI/AAAAAAAAA_A/8z1xpLEotag/s400/DSC_7825.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br />A simple interface where placing a microphone on a surface and then tapping on the surface in distinct ways can be used to control a device. Also a very simple demo of using opencv face tracking to reorient a displayed video to correct for distortion seen when viewing a flat screen from an angle.<br /><br /><b>Look around the building</b><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/n0NOSAeeYb3_HmCu4Wj0Ew"><img src="http://lh5.ggpht.com/binarymillenium/SORDxpPCmqI/AAAAAAAAA9U/cDVR86Ei1cM/s400/DSC_7804.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/wNse03Iv1ZoySptIKgKGLg"><img src="http://lh3.ggpht.com/binarymillenium/SOREk193RsI/AAAAAAAABA0/EDoTmJGsNpE/s400/DSC_7839.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br /><table style="width:auto;"><tr><td><a href="http://picasaweb.google.com/lh/photo/vykS5nGqAKJ-049yJcsylA"><img 
src="http://lh4.ggpht.com/binarymillenium/SOREnSDqRLI/AAAAAAAABA8/s_0O6vvQZU8/s400/DSC_7840.jpg" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="http://picasaweb.google.com/binarymillenium/20081001IntelResearchSeattle">2008.10.01 Intel Research Seattle</a></td></tr></table><br /><br />I found myself walking in a circle and not quite intuitively feeling that I had completed a circuit when I actually had.<br /><br /><br />Also see <a href="http://www.xconomy.com/boston/2008/10/02/personal-robots-home-sensing-private-networks-and-more-from-intel-research-seattles-open-house/attachment/wirelesspower/">another article about this</a>, Intel's <a href="http://www.flickr.com/photos/intelphotos/sets/72157607675418748/">flickr photos</a>, and <a href="http://www.wherearejohnandtodd.com/?p=278">a video</a>.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-6117933513931773536?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-79717440723557246662008-09-26T21:08:00.000-07:002008-09-26T21:12:49.282-07:002008-09-26T21:12:49.282-07:00initialization discards qualifiers from pointer target typeA const may be needed; the following produces the error:<br /><code><br /> PixelPacket *p = AcquireImagePixels(image,0,y,image->columns,1,&image->exception);<br /></code><br /><br />while this fixes it:<br /><code><br /> const PixelPacket *p = AcquireImagePixels(image,0,y,image->columns,1,&image->exception);<br /></code><br /><br />In other news, hopefully soon I'll have an ARToolkit app for reading in jpegs using ImageMagick, and also that app will have some other more exciting attributes.<div class="blogger-post-footer"><img width='1' height='1' 
src='http://res1.blogblog.com/tracker/28093388-7971744072355724666?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-64212535827533821912008-09-08T22:24:00.000-07:002008-09-09T20:49:55.422-07:002008-09-09T20:49:55.422-07:00Increased Dynamic Range For Depth Maps, and Collages in Picasa 3<object width="400" height="225"> <param name="allowfullscreen" value="true" /> <param name="allowscriptaccess" value="always" /> <param name="movie" value="http://www.vimeo.com/moogaloop.swf?clip_id=1604343&amp;server=www.vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" /> <embed src="http://www.vimeo.com/moogaloop.swf?clip_id=1604343&amp;server=www.vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="225"></embed></object><br /><a href="http://www.vimeo.com/1604343?pg=embed&amp;sec=1604343">360 Vision + 3rd Person Composite</a> from <a href="http://www.vimeo.com/user168788?pg=embed&amp;sec=1604343">binarymillenium</a> on <a href="http://vimeo.com?pg=embed&amp;sec=1604343">Vimeo</a>.<br /><br />After I compressed the above video into a WMV I was dissatisfied with how little depth detail there is in the 360 degree vision part - it's the top strip. I initially liked the cleaner single shade look, but now I realize the utility of using a range of colors for depth fields (or IR/thermal imaging also)- it increases the number of distinct depth levels that can be represented beyond 256. 
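As a sketch of the idea (the six color stops here are my reading of the color bar described next- magenta, blue, cyan, green, yellow, red- and the exact interpolation step is an assumption), interpolating 256 steps across each of the five gaps between stops yields a 1280-entry ramp:

```python
def depth_color_ramp():
    """Build a 1280-entry depth-to-color ramp from six color stops.

    Each adjacent pair of stops differs in only one channel, so the ramp
    stays visually smooth while offering far more levels than one 0-255
    channel; 5 segments * 256 steps = 1280 entries.
    """
    stops = [(255, 0, 255), (0, 0, 255), (0, 255, 255),
             (0, 255, 0), (255, 255, 0), (255, 0, 0)]
    ramp = []
    for (r0, g0, b0), (r1, g1, b1) in zip(stops, stops[1:]):
        for i in range(256):
            # Truncate toward zero so each segment ends just short of the
            # next stop, avoiding duplicate colors at segment boundaries.
            ramp.append((r0 + int((r1 - r0) * i / 256),
                         g0 + int((g1 - g0) * i / 256),
                         b0 + int((b1 - b0) * i / 256)))
    return ramp
```

A depth value scaled to 0-1279 then indexes straight into the ramp.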
Earlier I tried using one color channel for higher order bits and another for lower order bits (so the depth could be computed like red*256+green) for a total of 256*256 depth levels (or even 256^3 or 256^4 using alpha), but visually it's a mess.<br /><br />But visual integrity can be maintained while multiplying those 256 levels by five or a bit more with additional work.<br /><br />Taking six colors, three of them are pure red, green, blue, and in between there is (255,255,0) for yellow and the other two pure combinations of two channels. Between each subsequent pair there can be 256 interpolated values, and in the end a color bar like the following is generated with 1280 different values:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_apbW4BHhLTg/SLTQVBl2J1I/AAAAAAAAAdg/jtd2VPXQSjQ/s1600-h/colorrange.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="http://1.bp.blogspot.com/_apbW4BHhLTg/SLTQVBl2J1I/AAAAAAAAAdg/jtd2VPXQSjQ/s400/colorrange.png" alt="" id="BLOGGER_PHOTO_ID_5239041326161733458" border="0" /></a><br /><br /><img src="http://chart.apis.google.com/chart?cht=lc&chco=ff0000,0000ff,00ff00&chd=t:100,0,0,0,100,100|100,100,100,0,0,0|0,0,100,100,100,0&chs=400x120&chl=purple|blue|lightblue|green|yellow|red&chls=5,1,0|5,1,0|5,1,0&chxs="></img><br /><br />The bottom color bar shows the differences between adjacent values- if the difference was none then it would be black in spots, so my interpolation is verified.<br /><br />Applying this to the <a href="http://binarymillenium.googlecode.com/files/velodyne.zip">lidar data</a>, I produced a series of images with a <a href="http://binarymillenium.googlecode.com/svn-history/r286/trunk/processing/velosphere/color_spectrum.pde">processing project</a>:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" 
href="http://1.bp.blogspot.com/_apbW4BHhLTg/SMYH6CXwnSI/AAAAAAAAApM/3MfaArHHDrU/s1600-h/frames.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="http://1.bp.blogspot.com/_apbW4BHhLTg/SMYH6CXwnSI/AAAAAAAAApM/3MfaArHHDrU/s400/frames.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5243887509769854242" /></a><br /><br />After making all the images I tried out Picasa 3 to produce a collage- the straightforward grid makes the most sense here. Picasa 3 crashed a few times in the collage editor before I was able to get this exported.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-6421253582753382191?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com2tag:blogger.com,1999:blog-28093388.post-29264873700769677432008-08-31T07:50:00.000-07:002008-10-08T07:19:08.393-07:002008-10-08T07:19:08.393-07:00Photosynth Export Process TutorialIt looks like I have unofficial recognition/support for my export process, but I get the feeling it's still too user-unfriendly:<br /><a href="http://getsatisfaction.com/livelabs/topics/pointcloud_exporter"><br />http://getsatisfaction.com/livelabs/topics/pointcloud_exporter</a><br /><br /><b>What to do</b><br /><br />Get Wireshark <a href="http://www.wireshark.org/">http://www.wireshark.org/</a><br /><br />Allow it to install the special software to intercept packets.<br /><br />Start Wireshark. Put<br /><code><br />http.request<br /></code><br />into the filter field.<br /><br />Quit any unnecessary network activity like playing YouTube videos- that will dump a lot of extra data into Wireshark and make finding the bin files harder.<br /><br />Open the photosynth site in a browser. Find a synth with a good point cloud; it will probably be one with several hundred photos and a synthiness of > 70%. 
There are some synths that are 100% synthy but have point clouds that are flat billboards rather than cool 3D features- you don't want those. Press p or hold ctrl to see the underlying point cloud.<br /><br />Start a capture in Wireshark - click the button in the upper left and then click the proper interface (experiment if necessary).<br /><br />Hit reload on the browser window showing the synth. Wireshark should then start showing what files are being sent to your computer. Stop the capture once the browser has finished reloading. There may be a couple of screenfuls, but near the bottom there should be a few listings of bin files.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_apbW4BHhLTg/SLqwNkkv5xI/AAAAAAAAAeM/RDZG47PsOyk/s1600-h/wiresharkbin.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="http://2.bp.blogspot.com/_apbW4BHhLTg/SLqwNkkv5xI/AAAAAAAAAeM/RDZG47PsOyk/s400/wiresharkbin.png" alt="" id="BLOGGER_PHOTO_ID_5240694863601592082" border="0" /></a><br /><br />Select one of the lines that shows a bin file request, then right-click and hit Copy | Summary (text). Then in a new browser window paste that into the address field. Delete the parts before and after /d8/348345348.../points_0_0.bin. Look back in Wireshark to discover what http address to use prior to that- it should be http://mslabs-nnn.vo.llnwd.net, where nnn is any three digit number. TBD- is there a way to cut and paste the fully formed url less manually?<br /><br />If done correctly, hitting return will make the browser load the file- a dialog will pop up; save it to disk. If there were many points bin files, increment the 0 in the file name and get them all. If you have Cygwin, a bash script works well:<br /><code><br />for i in `seq 0 23`<br />do<br />wget http://someurl/points_0_$i.bin<br />done<br /></code><br /><br /><b>Python</b><br /><br />Install Python. 
If you have cygwin, install the cygwin python with setup.exe; otherwise go to http://www.python.org/download/ and download the windows installer version.<br />*** UPDATE *** It appears the 2.5.2 windows python doesn't work correctly, which I'll look into- the best solution is to use Linux or Cygwin with the python that comes with them ***<br /><br />Currently the script <a href="http://binarymillenium.googlecode.com/svn/trunk/processing/psynth/bin_to_csv.py">http://binarymillenium.googlecode.com/svn/trunk/processing/psynth/bin_to_csv.py</a> works like this from the command line:<br /><code><br />python bin_to_csv.py somefile.bin > output.csv<br /></code><br /><br />But I think the '>' will only work with cygwin and not the windows command prompt. I'll update the script to optionally take a second argument that is the output file.<br /><br />If there are multiple points bin files, it's easy to do another bash loop to process them all in a row; otherwise manually do the command above and create n different csvs for n bin files, and then cut and paste the contents of each into one complete csv file.<br /><br />The output will be a file with a long listing of numbers; each line looks like this:<br /><code><span style="font-size:78%;"><br />-4.17390823, -1.38746762, 0.832364499, 24, 21, 16<br />-4.07660007, -1.83771312, 1.971277475, 17, 14, 9<br />-4.13320493, -2.56310105, 2.301105737, 10, 6, 0<br />-2.97198987, -1.44950056, 0.194522276, 15, 12, 8<br />-2.96658635, -1.45545017, 0.181564241, 15, 13, 10<br />-4.20609378, -2.08472299, 1.701148629, 25, 22, 18</span><br /></code><br />The first three numbers are the xyz coordinates of a point, and the last three are the red, green, and blue components of the color. In order to get a conventional 0-255 number for each color channel, red and blue would have to be multiplied by 8, and green by 4. 
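As a sketch, that scaling could be applied to a parsed row like so- the column order is taken from the sample output above, and the 5-bit red/blue and 6-bit green channel widths are an assumption inferred from the multipliers:

```python
# Sketch: scale a parsed (x, y, z, r, g, b) row from the csv output
# to conventional 0-255 color values. Assumes 5-bit red/blue and
# 6-bit green channels, as the multipliers above imply.

def scale_row(row):
    x, y, z, r, g, b = row
    # shift each channel up to an 8-bit range
    return (x, y, z, r * 8, g * 4, b * 8)

# first sample row from the output above: color becomes (192, 84, 128)
print(scale_row((-4.17390823, -1.38746762, 0.832364499, 24, 21, 16)))
```

Dividing by 255.0 afterwards would give the 0.0-1.0 floating point variant instead.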
The python script could be easily changed to do that, or even convert the color channels to 0.0-1.0 floating point numbers.<br /><br /><b>Point Clouds - What Next?</b><br />The processing files here can use the point clouds:<br /><a href="http://binarymillenium.googlecode.com/svn/trunk/processing/psynth/">http://binarymillenium.googlecode.com/svn/trunk/processing/psynth/</a><br /><br />Also programs like <a href="http://meshlab.sourceforge.net/">Meshlab</a> can use them with some modification- I haven't experimented with it much but I'll look into that and make a post about it.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-2926487370076967743?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com38tag:blogger.com,1999:blog-28093388.post-69653334093697694082008-08-28T20:59:00.001-07:002008-08-29T07:13:16.101-07:002008-08-29T07:13:16.101-07:00Color Correction<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_apbW4BHhLTg/SLd03jQMEEI/AAAAAAAAAdw/RYzOsU11ULA/s1600-h/sphinx3.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="http://4.bp.blogspot.com/_apbW4BHhLTg/SLd03jQMEEI/AAAAAAAAAdw/RYzOsU11ULA/s400/sphinx3.jpg" alt="" id="BLOGGER_PHOTO_ID_5239785189173628994" border="0" /></a><br /><br />I have the colors figured out now: I was forgetting to byteswap the two color bytes, and after that the rgb elements line up nicely. 
And it's 5:6:5 bits per color channel rather than 4 as I thought previously, thanks to Marvin <a href="http://binarymillenium.blogspot.com/2008/08/exporting-point-clouds-from-photosynth.html">who commented below</a>.<br /><br />The sphinx above looks right, but earlier the boxer shown below looked so wrong I colored it falsely to make the video:<br /><br /><br /><object width="400" height="225"> <param name="allowfullscreen" value="true" /> <param name="allowscriptaccess" value="always" /> <param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=1619784&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" /> <embed src="http://vimeo.com/moogaloop.swf?clip_id=1619784&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="225"></embed></object><br /><a href="http://vimeo.com/1619784?pg=embed&amp;sec=1619784">The Boxer - Photosynth Export</a> from <a href="http://vimeo.com/user168788?pg=embed&amp;sec=1619784">binarymillenium</a> on <a href="http://vimeo.com?pg=embed&amp;sec=1619784">Vimeo</a>.<br /><br />But I've fixed the boxer now:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_apbW4BHhLTg/SLeCUCeOX1I/AAAAAAAAAeA/w9z7O58tL30/s1600-h/boxer.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="http://4.bp.blogspot.com/_apbW4BHhLTg/SLeCUCeOX1I/AAAAAAAAAeA/w9z7O58tL30/s400/boxer.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5239799972241497938" /></a><br /><br />The <a href="http://binarymillenium.googlecode.com/svn/trunk/processing/psynth/bin_to_csv.py">python script is updated</a> with this code:<br /><br /><code><br /> bin.byteswap()<br /> red = (bin[0] >> 11) & 0x1f<br /> green = (bin[0] >> 5) & 0x3f<br /> blue = (bin[0] >> 
0) & 0x1f<br /></code><div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-6965333409369769408?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com5tag:blogger.com,1999:blog-28093388.post-49143687334929881002008-08-27T10:28:00.000-07:002008-08-28T22:06:45.418-07:002008-08-28T22:06:45.418-07:00Exporting Point Clouds From Photosynth<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_apbW4BHhLTg/SLWXYydOO2I/AAAAAAAAAdo/xcT0DtU3z_8/s1600-h/sphinx.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="http://1.bp.blogspot.com/_apbW4BHhLTg/SLWXYydOO2I/AAAAAAAAAdo/xcT0DtU3z_8/s400/sphinx.png" alt="" id="BLOGGER_PHOTO_ID_5239260193632435042" border="0" /></a><br />Since my last post about photosynth I've revisited the site and discovered that the pictures can be toggled off with the 'p' key, and the viewing experience is much improved given there is a good point cloud underneath. But what use is a point cloud inside a browser window if it can't be exported to be manipulated into random videos that could look like all the lidar videos I've made, or turned into 3D meshes and used in Maya or any other program?<br /><br /><a href="http://getsatisfaction.com/livelabs/topics/3d_export_in_various_formats">Supposedly export will be added in the future</a>, but I'm impatient like one of the posters on that thread so I've gone forward and figured out my own export method without any deep hacking that might violate the terms of use.<br /><br />Using one of those programs to intercept 3D api calls might work, though maybe not with DirectX or however the photosynth browser window is working. What I found with Wireshark is that http requests for a series of points_m_n.bin files are made. 
The m is the group number; if the photosynth is 100% synthy then there will only be one group, labeled 0. The n splits up the point cloud into smaller files; for a small synth there could just be points_0_0.bin.<br /><br />Inside each bin file is raw binary data. There is a variable-length header which I have no idea how to interpret; sometimes it is 15 bytes long and sometimes hundreds or thousands of bytes long (though it seems to be shorter in smaller synths).<br /><br />But after the header there is a regular set of position and color values, each 14 bytes long. The first 3 sets of 4 bytes are the xyz position in floating point values. In python I had to do a byteswap on those bytes (presumably from network order) to get them to be read in right with the readfile command.<br /><br />The last 2 bytes are the color of the point. It's only 4 bits per color channel, which is strange. The first four bits I don't know about; the last three sets of 4 bits are red, blue, and green. Why not 8 bits per channel? Does the photosynth process not produce that level of precision because it is only loosely matching the color of corresponding points in photos? 
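For illustration, a minimal sketch of decoding one 14-byte record under the assumptions above- big-endian (network order) floats, with the 2-byte color word left raw since its bit layout is still an open question, and skipping the header left to the reader:

```python
# Sketch: decode one 14-byte position+color record from a points bin
# file. Assumes big-endian (network order) encoding as described
# above; the color word is returned raw since its layout is unclear.
import struct

def decode_point(rec):
    x, y, z = struct.unpack('>fff', rec[0:12])  # 3 x 4-byte floats
    color, = struct.unpack('>H', rec[12:14])    # 2-byte color word
    return (x, y, z, color)
```

Called on each 14-byte slice after the header, this reproduces the xyz values shown in the csv output; the color word can then be split up however the current guess about the channels suggests.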
Anyway as the picture above shows I'm doing the color wrong- if I have a pure red or green synth it looks right, but maybe a different color model than standard rgb is at work.<br /><br />I tried making a photosynth of photos that were masked to be blue only- and zero synthiness resulted - is it ignoring blue because it doesn't want to synth up the sky in photos?<br /><br />Anyway <a href="http://binarymillenium.googlecode.com/svn/trunk/processing/psynth/bin_to_csv.py">here is the python script for interpreting the bin files</a>.<br /><br />The sceneviewer (taken from the Radiohead sceneviewer) in that source dir works well for displaying them also.<br /><br />Anyway, to repeat this for any synth, Wireshark needs to figure out where the bin files are served from (filter with http.request); then they can be downloaded in firefox or with wget or curl, then my script can be run on them, and processing can view them. The TOU doesn't clearly specify how the point clouds are covered, so redistribution of point clouds, especially those not from your own synths or that someone didn't CC license, may not be kosher.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-4914368733492988100?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com8tag:blogger.com,1999:blog-28093388.post-13386704665627471342008-08-24T00:50:00.000-07:002008-08-24T00:55:11.715-07:002008-08-24T00:55:11.715-07:00More python pcap with pcapyAfter running into pcap files several hundred megabytes in size that caused Wireshark to crash when loaded, I returned to trying to make python work with the source pcap file:<br /><br /><code><br />import pcapy<br /><br />vel = pcapy.open_offline('unit 46 sample capture velodyne area.pcap')<br /><br />vel<br />Reader object at 0xb7e1b308<br /><br />pkt = vel.next<br /><br />pkt<br />built-in method next of Reader object at 0xb7e1b308<br 
/></code><br />What is a Reader object, and a built-in method of it? Why are the addresses the same?<br /><br />Try<br /><code><br />pkt = vel.next()<br /><br />type(vel)<br />type 'tuple'<br /><br />vel[1]<br /><br /><br />'\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x08\x00E\x00\x04\xd2<br />\x00\x01\x00\x00\x80\x11\xad\x9f\xc0\xa8\x03+\xc0\xa8\x03\xff\x01\xbb<br />\t@\x04\xbe\x00\x00\xff\xddz\x13\xee<br />...<br />\x00\x00\x10\xd63Md\xc2\xff\x00\x00\x0fp\x07v25b'<br /></code><br /><br />So that's just the problem I was running into before: '\xff' is an ascii representation of binary data that's really 0xff. But I'm given it in ascii- I can't index into this and get a specific 8-bit 0-255 binary value, I get a '\'. Do I have to write something that goes through this and reinterprets the ascii-ized hex back into real hex?<br /><br />Also I note the ff dd that marks the beginning of the data frame is there but not at the beginning- so there are other parts of the packet here I need to get rid of. 
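(In hindsight, a quick sketch shows the '\xff' forms are just Python's printable repr of real bytes- indexing gives a one-character string and ord() recovers the 0-255 value, so no re-parsing pass is needed; the data string here is a made-up payload fragment:)

```python
# The '\xff' escapes above are just how Python displays raw bytes in
# a string- ord() on an indexed character recovers the numeric value.
data = '\xff\xdd\x7a\x13'     # hypothetical start of a packet payload
assert ord(data[0]) == 0xff   # really is the byte 0xff, not a '\'
assert ord(data[1]) == 0xdd
```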
Is this where I need Impacket?<br /><br />import impacket<br />from impacket.ImpactDecoder import EthDecoder<br /><code><br />decoder = EthDecoder()<br />b = decoder.decode(a)<br />Traceback (most recent call last):<br />File "stdin", line 1, in ?<br />File "/var/lib/python-support/python2.4/impacket/ImpactDecoder.py", line 38, in decode<br /> e = ImpactPacket.Ethernet(aBuffer)<br />File "/var/lib/python-support/python2.4/impacket/ImpactPacket.py", line 340, in __init__<br /> self.load_header(aBuffer)<br />File "/var/lib/python-support/python2.4/impacket/ImpactPacket.py", line 255, in load_header<br /> self.set_bytes_from_string(aBuffer)<br />File "/var/lib/python-support/python2.4/impacket/ImpactPacket.py", line 59, in set_bytes_from_string<br /> self.__bytes = array.array('B', data)<br />TypeError: an integer is required<br /></code><br />oops<br /><code><br />b = decoder.decode(a[1])<br />print b<br />Ether: 0:0:0:0:0:0 -> ff:ff:ff:ff:ff:ff<br />IP -><br />UDP 443 -> 2368<br /><br />ffdd 7a13 ee09 3cbe 093f 1811 2b1f 1020 ..z.....?..+..<br />0a0a 63ac 0848 d708 53ea 085b 0000 1bc0 ..c..H..S..[....<br />0a35 0c09 425b 0936 000d 2e44 0d2f 120b .5..B[.6...D./..<br />4f5e 0b30 200f 3c4e 0e50 c30b 46c7 0b4d O^.0 .&lt;n.p..f..m&gt;</code><br />The decode makes a nice human-readable text of the packet, but not what I want.<br /><br />Here is a different tack- by looking in the ImpactPacket.py source I found how to convert that annoying ascii back to real bytes, which is the only real issue:<br /><code><br />mybytes = array.array('B', vel[1])<br /></code><br />mybytes is of size 1248, so there appear to be 42 extra bytes of unwanted ethernet wrapper there- why not just index into mybytes like mybytes[42:] and dump that to a binary file?<br /><br />I don't know about the dumping to binary file (print mybytes prints it in ascii, not binary)- but I could easily pass that array straight into the velodyne parsing code- and this skips the intermediate 
file step, saving time and precious room on my always nearly full laptop hd.<br /><br />So <a href="http://code.google.com/p/binarymillenium/source/browse/trunk/processing/velodyne/pcap_to_csv.py?r=275">here is the final result</a>, which produces good CSVs I was able to load with my 'velosphere' Processing project to create 360 degree panoramas from the lidar data:<br /><br />Next I need a way to write pngs from python, and I could eliminate the CSVs &amp; Processing step.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-1338670466562747134?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com2tag:blogger.com,1999:blog-28093388.post-51517438007631997322008-08-23T06:31:00.000-07:002008-08-29T07:13:41.060-07:002008-08-29T07:13:41.060-07:00PhotosynthWhen I first saw the original demo I was really impressed, but now that it's been released I feel like it hasn't advanced enough since that demo to really be useful. I tried a few random synths when the server was having problems; it looks like it isn't being hammered any longer, so I ought to try it again soon when I'm using a compatible OS.<br /><br />Overall it's confused and muddled to use and look at- like a broken quicktime VR.<br /><br />Photosynth seems to work best in terms of interface and experience when it is simply a panoramic viewer of stitched-together images- where all the images are taken from a single point, of buildings or scenery around the viewer. It's easy to click left or right to rotate left or right and have the view intuitively change. But we've had photostitching software that produces smooth panoramas that look better than this for years, so there's nothing new on offer here.<br /><br />When viewing more complicated synths, the UI really breaks down. 
I don't understand why, when I click and drag the mouse, the view rotates to where I'd like but then snaps back to where it used to be when I let go of the button. It's very hard to move naturally through 3D space- I think the main problem is that the program is too photo-centric: it always wants to feature a single photograph prominently rather than a more synthetic view. Why can't I pull back to view all the photos, or at least a jumble of outlines of all the photos?<br /><br />It seems like there is an interesting 3D point cloud of points found to be common to multiple photos underlying the synth, but it can't be viewed on its own (much less downloaded...); there are always photos obscuring it. The photograph prominence is constantly causing nearby photos to become blurry or transparent in visually disruptive ways.<br /><br />Finally, it seems like the natural end-point of technology like this is to generate 3D textured models of a location, with viewing of the source photos as a feature but not the most prominent mode. Can this be done with photosynth-like technology, or are all the aspects I don't like a way of covering up the fact that it can't actually do that? 
Maybe it can produce 3D models but they all come out horribly distorted (in which case they could provide a UI to manually undistort them).<br /><br />Hopefully they will improve on this, or another well-backed site will deliver fully on the promise shown here.<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-5151743800763199732?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com0tag:blogger.com,1999:blog-28093388.post-36632938753336493722008-08-20T08:48:00.000-07:002008-08-23T06:30:56.496-07:002008-08-23T06:30:56.496-07:00Makeavi<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_apbW4BHhLTg/SKw-ZV-jUWI/AAAAAAAAAdQ/5H3fUabAGTQ/s1600-h/prepross_height_10046.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="http://3.bp.blogspot.com/_apbW4BHhLTg/SKw-ZV-jUWI/AAAAAAAAAdQ/5H3fUabAGTQ/s400/prepross_height_10046.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5236629071842201954" /></a><br />Discovered a neat windows (and vista) tool for turning image sequences into videos: <a href="http://makeavi.sourceforge.net/">http://makeavi.sourceforge.net/</a><br /><br /><br />1280x720 in the 'Microsoft Video 1' format worked well, though 57 MB of pngs turned into 135 MB of video. 'Uncompressed' didn't produce a video, just a small 23kb file. 'Intel IYUV' sort of produced a video but not correctly. 'Cinepak' only output a single frame. 'VP60 Simple profile' and 'VP61 Advanced Profile' with the default settings worked, and actually produced video smaller than the source images, though quicktime player didn't like those files. 
Vimeo seems to think VP61 is okay:<br /><br /><object width="400" height="225"> <param name="allowfullscreen" value="true" /> <param name="allowscriptaccess" value="always" /> <param name="movie" value="http://www.vimeo.com/moogaloop.swf?clip_id=1570449&amp;server=www.vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" /> <embed src="http://www.vimeo.com/moogaloop.swf?clip_id=1570449&amp;server=www.vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=01AAEA&amp;fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="225"></embed></object><br /><a href="http://www.vimeo.com/1570449?pg=embed&amp;sec=1570449">More Velodyne Lidar - overhead view</a> from <a href="http://www.vimeo.com/user168788?pg=embed&amp;sec=1570449">binarymillenium</a> on <a href="http://vimeo.com?pg=embed&amp;sec=1570449">Vimeo</a>.<br /><br />This new video is similar to the <a href="http://binarymillenium.blogspot.com/2008/08/animated-gif-of-height-map.html">animated gifs I was producing earlier</a>, but using a new set of data. Vimeo seems to be acting up this morning; I got 75% through an upload of the entire file (the above is just a subset) and it locked up. I may try to produce a shorter test video to see if it works.<br /><br />I have around 10 gigs of lidar data from Velodyne, and of course no way to host it.<br /><br />My process for taking pcap files and exporting the raw data has run into a hitch- Wireshark crashes when trying to 'follow udp stream' for pcap files larger than a couple hundred megabytes. Maybe there is another tool that can do the conversion to raw?<div class="blogger-post-footer"><img width='1' height='1' src='http://res1.blogblog.com/tracker/28093388-3663293875333649372?l=binarymillenium.com'/></div>binarymilleniumhttp://www.blogger.com/profile/17419830604356775608noreply@blogger.com1