Histogram targetid #53
base: master

Conversation
…gram is very slow. Need help in optimization.
Two main things I'd like to point out: first, there is a Filter class which you should be subclassing (it lets us access all filters through a common interface). Second, you should use an incremental average histogram stored inside the filter and update it every time you read a frame, rather than recreating it each time; that will be much faster. For testing, you then run it over the entire data set first so that you have a useful average histogram before running it once on the image you want to test. Also, if you update your branch there is now a set of sample images in the testdata folder which closely resemble the image data we'll be getting at the competition and would be more useful for testing with, as well as the interactive tester that you might find useful (though maybe I should write something up at some point on how to use it).
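For illustration, a minimal sketch of what the incremental average histogram could look like inside a filter, using a running mean over a 180-bin hue histogram; the class name, bin layout, and the details of the project's Filter interface are assumptions here, not the actual implementation:

#include <opencv2/opencv.hpp>

// Hypothetical filter keeping a running-average hue histogram across frames.
class HistogramFilter /* : public Filter  -- project base class, details assumed */ {
public:
    cv::Mat filter(const cv::Mat &src) {
        cv::Mat hsv, hist;
        cv::cvtColor(src, hsv, cv::COLOR_BGR2HSV);

        int histSize = 180;                     // one bin per hue value
        float hueRange[] = {0, 180};
        const float *ranges[] = {hueRange};
        int channels[] = {0};
        cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);

        // Incremental (running) mean: avg += (hist - avg) / n, updated every frame.
        frameCount++;
        if (avgHist.empty())
            avgHist = hist.clone();
        else
            avgHist += (hist - avgHist) / static_cast<double>(frameCount);

        // ... use avgHist to decide which colours are unusual in this frame ...
        return cv::Mat();                       // placeholder result
    }

private:
    cv::Mat avgHist;
    int frameCount = 0;
};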
@benjaminwinger What about using the constructor to generate the initial histograms as well as the average histogram, and then using the filter() method to detect the target in a specific image?
@benjaminwinger I also find that my code cannot handle the new test photos well... The problem is that most pictures have a large, varying white area, which results in large "peaks" for white, while the red targets produce very insignificant "peaks". I don't know how to filter out the white part while still getting the red targets.
The idea is that this is a fast algorithm that can be used in real time. There is no data that can be preprocessed in the constructor. You have to use preprocessing for testing because otherwise it would only be run on a single image (which is the worst case for the algorithm). |
@benjaminwinger What should I do if there is no target in the input Mat? Return a NULL pointer or return a black Mat?
…ved preprocessing part, now the code calculates avg histogram every time a new Mat is taken.
You should check the documentation and see what Filters are supposed to do. While we could conceivably have it return null to signify that there's nothing to find and then skip any further processing (which would be fairly trivial for a blank image anyway), that would break some existing code, since we've been assuming up to this point that the returned Mat is non-null. It's simpler if you return an empty Mat: that achieves the same performance gain, and then we don't have to account for a null pointer result.
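A rough sketch of that convention (the function name and structure are placeholders, not the real filter code):

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch only: return an empty Mat when nothing is found, so callers can test
// Mat::empty() instead of checking for a null pointer.
cv::Mat filterOrEmpty(const cv::Mat &src) {
    std::vector<std::vector<cv::Point>> contours;
    // ... real detection would populate 'contours' here ...
    if (contours.empty())
        return cv::Mat();                           // "no target found"
    return cv::Mat::zeros(src.size(), src.type());  // placeholder for the real result
}

// Caller side:
//   cv::Mat out = filterOrEmpty(frame);
//   if (out.empty()) { /* nothing found, skip further processing */ }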
@benjaminwinger I don't know why the build fails on github... The code works fine when I build it locally.
Look at the test results on the ARM server; it's not the build that's failing. You've included calls to a graphical API (highgui) in the test you added, which causes it to raise an exception. Unit tests need to be automated.

Properly testing object detection code can't really be done as a traditional test, since it's hard to say definitively when a test should pass. I've been meaning for some time to set up a helper for benchmarking object detection code which compares the overlap of the contour results against a hard-coded contour for each image, and also times the function call, but I won't get around to that until the beginning of next week at the earliest. For now you could do the same as what I'm doing for the other test (i.e. outputting the resulting contours manually just so we can see that the results are sane), and you may as well add it to the same file, though you'll have to pass in the photos folder as the command line argument instead of a specific photo so that you can read more than just one photo; see the sketch below.
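Something along these lines might work for the folder-iterating test; the test name, the argv index, and the use of boost::filesystem to walk the directory are assumptions rather than how the existing tests are wired up:

#include <boost/filesystem.hpp>
#include <boost/test/unit_test.hpp>
#include <opencv2/opencv.hpp>
#include <string>

// Assumes the photos folder is the first extra argument passed to the test binary
// and that this file is already part of a Boost.Test module.
BOOST_AUTO_TEST_CASE(histogram_filter_all_photos) {
    std::string photoDir = boost::unit_test::framework::master_test_suite().argv[1];
    boost::filesystem::directory_iterator end;
    for (boost::filesystem::directory_iterator it(photoDir); it != end; ++it) {
        cv::Mat img = cv::imread(it->path().string());
        if (img.empty())
            continue;                      // skip anything that isn't a readable image
        BOOST_TEST_MESSAGE("Processing " << it->path().filename().string());
        // ... run the filter here and print the resulting contours for inspection ...
    }
}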
@benjaminwinger I am outputting the results manually because I do not know what I should compare them to... There is no BOOST_CHECK() in the hist_test, so it should pass unconditionally.
The test is failing due to the unhandled exception, not due to a test that doesn't pass. Displaying an image to the screen is not a test and Jenkins can't handle it anyway since it's not running from within the X server (and no-one's there to look at it anyway).
pack->input_r(i);
}
}
Mat* result=new Mat(src);
This doesn't copy the image. You want the Mat::clone function. The Mat object itself is just a container around an internal data structure and the copy constructor just makes a shallow copy in this case, meaning that you're modifying the original image.
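For anyone following along, the difference looks roughly like this (the file name is just a placeholder):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("input.png");  // "input.png" is just a placeholder

    cv::Mat shallow(src);        // copy constructor: shares src's pixel buffer
    cv::Mat deep = src.clone();  // allocates a new buffer and copies the pixels

    shallow.setTo(cv::Scalar::all(0));  // this also blanks src, because the data is shared
    deep.setTo(cv::Scalar::all(0));     // src is unaffected
    return 0;
}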
I think it's worth pointing out that static thresholding isn't really peak detection, and as you've mentioned, peak detection doesn't really work here since it doesn't pick up the targets. Peak detection is actually better suited to detecting the background than the targets.

Note that my original idea for recreating the image is rather different from peak detection. It looks at how far the given colours differ from the average and uses that difference to create a multiplier. The multiplier can then be curved by raising it to a few powers, ensuring that anything that has already appeared in the image an appreciable amount gets reduced drastically while anything that has barely appeared at all still shows up prominently.

Also, you probably want to apply some blurring to the images. Since we're largely working with average colours rather than the colours of individual pixels, it can help to blur images before processing them to prevent extremely small objects (i.e. a few pixels) from being identified. This becomes especially prominent when thresholding, since you'll get values close to the threshold appearing as small isolated blobs of colour, but also when dealing with histograms you're more likely to get a clean, smooth histogram if you blur the image a little first (see the OpenCV smoothing tutorial).
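To make the multiplier idea concrete, here is one rough sketch of how it could look: blur the frame, look up how common each pixel's hue is in the average histogram, and weight the pixel by a power of its rarity. The function name, normalisation, and exponent are illustrative assumptions, not the algorithm as actually implemented:

#include <cmath>
#include <opencv2/opencv.hpp>

// avgHueHist is assumed to be a 180-bin hue histogram accumulated over the data set.
cv::Mat emphasizeRareColours(const cv::Mat &bgr, const cv::Mat &avgHueHist, int power = 3) {
    cv::Mat blurred, hsv;
    cv::GaussianBlur(bgr, blurred, cv::Size(5, 5), 0);   // smooth out tiny blobs first
    cv::cvtColor(blurred, hsv, cv::COLOR_BGR2HSV);

    cv::Mat normHist;
    cv::normalize(avgHueHist, normHist, 0, 1, cv::NORM_MINMAX);  // 0 = rare hue, 1 = common hue

    cv::Mat result = cv::Mat::zeros(bgr.size(), CV_8UC1);
    for (int y = 0; y < hsv.rows; y++) {
        for (int x = 0; x < hsv.cols; x++) {
            int hue = hsv.at<cv::Vec3b>(y, x)[0];        // 0..179
            // The rarer the hue, the closer the multiplier is to 1; the power curve
            // crushes anything that already appears often in the data set.
            float multiplier = std::pow(1.0f - normHist.at<float>(hue), power);
            result.at<uchar>(y, x) = cv::saturate_cast<uchar>(255 * multiplier);
        }
    }
    return result;   // bright where the colour is unusual for the data set
}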
@benjaminwinger Do you mean that I can just use hue in HSV format as the "single channel"? Also, can you explain the "multiplier" part a bit? I don't understand what you are talking about...
…ctures, while eliminating target or failing to reduce background with others. help wanted.
n.push_back(3);
string telemetry_path=boost::unit_test::framework::master_test_suite().argv[2];
string filePath=boost::unit_test::framework::master_test_suite().argv[1];
ofstream fout(telemetry_path);
There are a couple of things I don't like about outputting this csv.
First of all, it's not actually necessary for a test that has nothing to do with the metadata.
But also, you're putting it in a fixed-path source directory (and then of course the generated file gets included in the commit); if anything, it should go with the generated build files. I would point out, however, that we do have the actual telemetry logs for the photos in the testdata directory. It would make more sense to include those and use them instead of generating placeholder files.
Could you maybe get rid of the 18 copies of a picture of the floor? You're not using them anymore and they're not terribly useful for testing.
It's worth pointing out that using the contour comparison function with one of the two contours being empty will always return 0. That's why a bunch of the checks are failing: you're checking that all of the results are non-zero, but for any check where you don't expect to find any objects, the result will be zero. |
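One way the checks could be phrased to account for that; compareContours stands in for whatever the comparison helper is actually called, and the 0..1 overlap range is assumed:

#include <boost/test/unit_test.hpp>
#include <opencv2/opencv.hpp>
#include <vector>

// Declared elsewhere in the project; the name, signature, and 0..1 overlap range
// are assumptions for this sketch.
double compareContours(const std::vector<std::vector<cv::Point>> &a,
                       const std::vector<std::vector<cv::Point>> &b);

void checkDetection(const std::vector<std::vector<cv::Point>> &detected,
                    const std::vector<std::vector<cv::Point>> &expected) {
    if (expected.empty())
        BOOST_CHECK(detected.empty());  // no target in this image: nothing should be found
    else
        BOOST_CHECK_GT(compareContours(detected, expected), 0.0);  // expect some overlap
}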
Two things, the first being a reminder that GUI code will break automated tests. Secondly, it's great that you put together a set of tests for all the test images, but since you're testing so many images, it would make the output easier to read and compare with other test runs if you reported which image is being tested when you print each result message.
No description provided.