Sunday, May 31, 2009

Presentation Practice

All our visual team members met at 3:00pm today in the colloquium room. We gave two demo presentations and gave each other comments on them. We edited our slides and checked the timing of the presentations.

We are hopeful about giving a good presentation on Wednesday.

Preparation for Presentation

We all met last Friday and talked for almost an hour and a half about our team's presentation. We figured out what we are going to talk about.

We decided that we would all speak, and that we would focus on the following points.

* Goal

* Problem Space

* Solution

* Open Problems

Wednesday, May 27, 2009

Final Turn-in

Peter just joined us. We cleaned up the project and then Peter turned it in.

Note: In the tech doc we also added open problems and future scope for future developers.

We are happy with what we turned in. We hope this will help with analyzing videos for research work.

Thanks

Final Doc Revision

Fan and I revised the User and Tech docs from 9:00am this morning until 11:30am. We changed some parts of the docs to make more sense, going line by line to check thoroughly. There might still be some problems, though.

Revised the ReadMe file.

It was a little cumbersome and took time.

GUI / Docs revision and Test Build

We worked from 4:00pm to 9:00pm on bug fixes and polishing. Kyle and I did the GUI revision. The docs and tutorials now open in the browser. We had been having issues where the docs were not opening in browsers on Mac; we fixed those, and now all the docs and tutorials open on both Mac and Windows. We have a video tutorial that uses Flash, so Flash Player must be installed to view it.

Later, Fan and I also revised the docs.

I did a test build and tested it on Mac and Windows. It works on both OSes.

Tuesday, May 26, 2009

Centroids for frames with no fingers

Kyle proposed writing (0,0) for frames with no fingers in them.
But (0,0) is still inside the image grid; (-1,-1) is something guaranteed to be outside the grid.
We finally settled on (-1,-1) for centroids that were not found in the frame.
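As a rough sketch of how that might look when we write each event to the XML file (the element and attribute names below are placeholders, not our finalized schema):

    import xml.etree.ElementTree as ET

    def add_event(parent, frame_no, centroid):
        # centroid is an (x, y) tuple, or None when no finger was found in this frame
        if centroid is None:
            centroid = (-1, -1)          # sentinel value, guaranteed to be outside the image grid
        event = ET.SubElement(parent, "event")
        event.set("frame", str(frame_no))
        event.set("x", str(centroid[0]))
        event.set("y", str(centroid[1]))

    root = ET.Element("events")
    add_event(root, 1, (120, 85))        # finger found at (120, 85)
    add_event(root, 2, None)             # no finger in this frame -> (-1, -1)
    print(ET.tostring(root).decode())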


Frame Rate, and Does It Need to Be in the XML File?

Well, I think we do not need 30 fps or even 20 fps; 10 fps should be enough, since a finger does not move as fast as the eyes. Even for eyes, 10 fps should be OK to track changes. I think a higher fps would only slow down the tracking. Using 30 fps is like having unnecessary points to draw a straight line that could actually be drawn with just 2 points. Let's not make our program look like a slow one. :)

Actually, I think we do not need more than 10 fps and can make that our suggested standard.
If someone wants to use our program, they can convert their video to that fps first.

Again, the problem is that we would be fixing our program at 30 fps. If someone tries testing a 60 fps or 15 fps video, our XML file would still have time = (1/30) * frameNo, not 1/60 or 1/15.

So, is there any way to pick up the fps from whatever video someone feeds to our program?

If not, then … use a standard small fps that is sufficient to analyze the data without losing information and that does not take much time to track with our program.

......

Fan was wondering whether we need to put the frame rate in the XML file.

My answer is:

We do not need to store the frame rate. If we can pick it up from whatever video we open for processing, then
we calculate time t = (1/frame_rate) * frameNo and put that in the XML file.

That way the XML file will have accurate times for a video at whatever fps it has.
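As a sketch of this idea (assuming a library like OpenCV for reading the video; our program may end up using something else), we could pick up the fps and compute the time like this:

    import cv2

    cap = cv2.VideoCapture("1finger.avi")
    fps = cap.get(cv2.CAP_PROP_FPS)                      # frame rate reported by the video itself
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

    def frame_time(frame_no, fps):
        # t = (1/frame_rate) * frameNo, the value to put in the XML time attribute
        return frame_no / fps

    print(fps, frame_count, frame_time(1, fps))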

Frame Rate and XML File Test

Test: 1finger.avi
Video length: 14 seconds
Number of frames: 99

Now, if it is true that the video is 30 fps (frames per second) when converted to uncompressed AVI, then:
the number of frames should be 30 fps * 14 s = 420 frames, but actually 99 frames were stacked into the program.
I think the frame rate is not 30, or maybe I am missing something.

Correct me if my calculation is wrong. We need to figure this out before we give the XML file to the signal team, and it would help Fan decide what to put in the time attribute of each event element. Right now she is printing the frame number,
so a value of 1 means frame number 1, which does not convey the right meaning.

We need to come up with a calculation that relates the number of frames stacked into the program to the fps, so that we can produce sensible values for the time attribute in each event element of the XML file.

I won't worry about picking up the time from the video itself. That is a long-term goal, and Fan is recording the time when the video is processed, which somewhat makes sense. But if we cannot relate the number of frames produced to the video length, the signal team won't be able to analyze our XML file to produce anything meaningful, and the findings would be meaningless.

Let's work on it. Looking forward to hearing from the video sub-team (Peter/Fan).

-------->
The tested video was actually 7 fps, so 7 × 14 = 98 frames was about right.
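In other words, the frame count and the video length already tell us the real frame rate, which we can use as a sanity check:

    frames = 99      # frames stacked into the program for 1finger.avi
    seconds = 14     # reported video length
    print(frames / seconds)   # ~7.07, i.e. about 7 fps, not 30 fps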

General GUI look change

The Finger Map image was opening in a separate window. I now have an idea:

How about opening the Finger Map window inside the main window (Finger Tracking) rather than in a separate window?

And making the Finger Map window smaller (maybe by resizing the Finger ID image).

My idea is this:

When the main window opens up, put the Finger ID image in a panel on the left side and some quick-start messages in a panel on the right (maybe the console panel is fine, with the initial quick-start messages on it).

From the View menu, the user can toggle the visibility of either of these two panels.

The message panel (or the console) might start with:

1. Choose a video to track fingers in from "File" -> Open
2. Select a finger from the combo box
3. Click on that fingertip in the image to track it
4. If you are an advanced user, use "Set Threshold"; otherwise keep the default threshold settings
5. Hit "Go" to start finger tracking
6. Click "Play" to view the tracked finger centroids of your video.

New GUI

Our previous GUI had buttons and some settings in the main window. I was thinking of adding menus as we added features; I added some and Kyle did some. I proposed a GUI with the Finger Map on the left and the Console on the right showing some quick-start messages. Users can toggle both the Console and Finger Map child windows on or off. We finally implemented the GUI; it looks like this.






I liked the check marks next to the menu items under the "View" menu; they indicate whether each option is enabled or not. Looks professional :)
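We haven't pasted our GUI code here, but just to sketch the idea of checkable View-menu items that toggle the two panels, something along these lines would do it (a rough Tkinter sketch, not our actual code; the widget names and layout are only illustrative):

    import tkinter as tk

    root = tk.Tk()
    root.title("Finger Tracking")

    finger_map = tk.Label(root, text="[Finger ID image]", relief="sunken", width=30, height=10)
    console = tk.Text(root, width=50, height=10)
    console.insert("end", "1. Choose a video to track fingers in from File -> Open\n")
    finger_map.pack(side="left", fill="both")
    console.pack(side="right", fill="both", expand=True)

    show_map = tk.BooleanVar(value=True)
    show_console = tk.BooleanVar(value=True)

    def toggle(widget, var, side):
        # show or hide the panel when its check mark changes
        if var.get():
            widget.pack(side=side, fill="both")
        else:
            widget.pack_forget()

    menubar = tk.Menu(root)
    view_menu = tk.Menu(menubar, tearoff=0)
    view_menu.add_checkbutton(label="Finger Map", variable=show_map,
                              command=lambda: toggle(finger_map, show_map, "left"))
    view_menu.add_checkbutton(label="Console", variable=show_console,
                              command=lambda: toggle(console, show_console, "right"))
    menubar.add_cascade(label="View", menu=view_menu)
    root.config(menu=menubar)

    root.mainloop()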

Sunday, May 17, 2009

Test with new Multi Finger Tracking

We tested a bigger video file, about two and a half minutes long, with the new program Kyle and I wrote. It ran successfully. :)

However, we realized that this video is not a typical one. In this video the user did not place their fingers immediately, but instead waited a couple of seconds before starting to explore the map.

This led us to the question:

1) What if the user delays before exploring the map?

Ans: This was a problem because we were loading only the first frame to give the user an opportunity to select the fingertip. We decided we can add a slider and load all frames; the user would slide to a frame where the finger is visible and click on the fingertip. However, we might not need to load all frames at the beginning; I think a couple of frames should be enough. That would save time.

Also, users sometimes removed their hands from the map, so frames with no finger at all are possible.

This led us to the question:

2) What if a finger moves out of the window, and then comes back in at a completely different area?
Ans: We can just record the center as a negative value, say (-1,-1), in our XML file, indicating that no finger was visible to track in that frame.

We were tracking fingertips within a small rectangle, on the assumption that finger movement won't be that fast, to save search time. But there may be situations where the user moves a hand to a completely different area, outside the map, or so fast that it leaves the considered search space.

This leads to the question:

3) If the center is not found in the search space of the next frame, how should we search?

Ans: The simple way is to search every pixel in the frame, i.e. the whole pixel grid rather than the previously considered small space. This should work if the finger moved quickly or to a different area. But if the finger is outside the frame, record a negative centroid, say (-1,-1).

Peer Programming - Test of Multi-Finger Tracking with New Algorithm

Kyle and I sat down together to test my proposed algorithm for multi-finger tracking. We completely replaced the old code, where the user had to specify which color to track, which colors to compare against, and by how much the tracking color should exceed the other colors. These settings were a hassle and ambiguous for a new user, and even a familiar user had to remember those values, which is a pain. My proposal instead is that if the user can point to the exact fingertip they are interested in, and we can get that exact pixel color, then we can track that exact color. This is simple and much more user friendly.

It worked after we spent around 3 hours on it. We could trace a finger with my proposed approach. The algorithm was simple enough; we just needed some thresholding. It worked because we were tracking the exact color of the finger rather than assuming a "red", "blue", "green" combination for it.

Monday, May 4, 2009

Multi Finger Tracking with advanced UI

We will have a menu and various settings in the next version of the UI. When the user opens a video, a frame should come up; the user will select fingers from the frame and check the corresponding finger ID checkbox.

Something to figure out:

1. What if the first frame does not have fingers in it?
Probable solution: Let the user slide forward until there is a frame with colored fingers in it. How many frames would we load before the actual tracking begins? We can load all frames before tracking if needed. The best option would be to input a video that starts with colored fingers on the map, to avoid unnecessary delay.

2. Let the user choose one finger at a time, because the finger ID needs to match the fingertip the user selects.

3. Threshold checking when matching pixel colors, to account for how a pixel's color fluctuates due to shading or anything else. We can make this an advanced setting for advanced users; everyone else should use the default threshold that our program sets.





There should be play, pause, and stop buttons, a slider, and a progress bar in the second window that comes up when a user feeds a video into the program.

I plan to have the UI design done soon.

Features for the Next Version

Needs:

  • Multi - Finger Tracking
  • Good UI -> Settings | Help
  • Finalized XML
  • Good Templated Docs
  • Stress Test / Negative Value Test
  • Coordinate Conversion/Scaling


Easy:

  • Progress Bar
  • Help Menu
  • Settings
  • Batch Processing


Hard:

  • Video conversion
  • Image Registration

Friday, May 1, 2009

My Proposed Revised Algorithm/Steps for Multi-Finger Tracking (Needs Modification)

1. Load the first frame from the video
2. Select the checkbox of the corresponding finger ID to track
3. Click on the fingertip (of that ID) in the frame to get the (X,Y) of that pixel
4. Put that (X,Y) value in the InputBox // just to check that you got nothing weird

Repeat steps 2-4 for as many fingers as you want to track.

When done, click on "Start Tracking".


# Get the color of the fingertip pixel selected by the user
# (getR/getG/getB are the program's pixel accessors for the current frame)
r = getR(x, y)
g = getG(x, y)
b = getB(x, y)


def track_center(x, y, r, g, b, frame_no, threshold=20, radius=10):
    # Use a threshold to allow for noise in the pixel color; both the
    # threshold (20) and the search radius (10) can be adjusted.
    # Look inside the rectangle (x-10, y-10) .. (x+10, y+10).
    candidate_pixels = []   # pixels having (almost) the same color as the fingertip

    for i in range(x - radius, x + radius + 1):
        for j in range(y - radius, y + radius + 1):
            if (abs(getR(i, j) - r) <= threshold and
                    abs(getG(i, j) - g) <= threshold and
                    abs(getB(i, j) - b) <= threshold):
                candidate_pixels.append((i, j))

    if not candidate_pixels:
        # Nothing found near the last position: search the whole frame
        # from (0, 0) to (dimension_x, dimension_y).
        for i in range(dimension_x):
            for j in range(dimension_y):
                if (abs(getR(i, j) - r) <= threshold and
                        abs(getG(i, j) - g) <= threshold and
                        abs(getB(i, j) - b) <= threshold):
                    candidate_pixels.append((i, j))

    if not candidate_pixels:
        return (-1, -1)     # no finger visible anywhere in this frame

    return find_center(candidate_pixels)


def find_center(candidate_pixels):
    center_x = sum(i for i, j in candidate_pixels) / len(candidate_pixels)
    center_y = sum(j for i, j in candidate_pixels) / len(candidate_pixels)
    return (center_x, center_y)


# Either keep tracking the originally selected (r, g, b), or re-sample the
# R, G, B at the new center and call track_center again for the next frame.