Sunday, May 31, 2009

Presentation Practice

All of our visual team members met at 3:00 pm today in the colloquium room. We gave two demo run-throughs, commented on each other's parts, edited our slides, and checked the timing of the presentation.

We are hopeful about giving a good presentation on Wednesday.

Preparation for Presentation

We all met last Friday and talked for almost an hour and a half about our team's presentation. We figured out what we are going to talk about.

We decided that we would all speak and that we would focus on the following points.

*Goal

*Problem Space

*Solution

*Open Problems

Wednesday, May 27, 2009

Final Turn-in

Peter just joined us. We cleaned up the project, and then Peter turned it in.

Note: In the tech doc we also added open problems and future scope for future developers.

We are happy with what we turned in. Hopefully it will help with analyzing videos for research work.

Thanks

Final Doc Revision

Fan and I revised the User and Tech docs from 9:00 am this morning until 11:30 am. We changed some parts of the docs to make more sense, going line by line to check thoroughly. Still, there might be some problems left.

Revised the ReadMe file.

It was a little cumbersome and took time.

GUI / Docs revision and Test Build

We worked from 4:00 pm to 9:00 pm on bug fixes and polishing. Kyle and I did the GUI revision. The docs and tutorials now open in the browser. We had issues on Mac where the docs were not opening in the browser; we fixed those, and now all the docs and tutorials open on both Mac and Windows. We have a video tutorial that uses Flash, so Flash Player must be installed to view it.

Later, Fan and I also revised the docs.

I did some test builds and tested on Mac and Windows. It works on both operating systems.

Tuesday, May 26, 2009

Centroids for frames with no fingers

Kyle proposed writing (0,0) for frames with no fingers in them.
But (0,0) is still inside the image grid; better to use something like (-1,-1), which is definitely outside the grid.
We finally settled on (-1,-1) for centroids that were not found in a frame.


Frame Rate, and Does It Need to Be in the XML File?

Well, I think we do not need 30 fps or even 20 fps; 10 fps should be enough, since a finger does not move as fast as the eyes, and even eye tracking should be fine at 10 fps. Higher fps would only slow down the tracking. Using 30 fps is like collecting unnecessary points to draw a straight line that could actually be drawn with 2 points. Let's not make our program look slower than it is. :)

Actually, I think we do not need more than 10 fps, and we can make that our suggested standard.
If someone wants to use our program, they can convert their video to that frame rate first.

Then again, the problem is that we would be hard-coding our program to 30 fps. If someone tests a video of 60 fps or 15 fps, our XML file would still compute time = 1/30 * frameNo rather than 1/15 or 1/60, which would be wrong.

So, is there any way to pick up the fps from whatever video someone feeds into our program?

If not, then … use a standard small fps that is sufficient to analyze the data without losing information and does not take much time to track with our program.

......

Fan was wondering whether we need to put the frame rate in the XML file.

My answer:

We do not need to store the frame rate. If we can pick it up from whatever video we open for processing, then calculate time t = (1/frame_rate) * frameNo and put that in the XML file.

That way the XML file will have an accurate time for a video of any fps.
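
A minimal sketch of that calculation in Java (frameRate here is assumed to come from the opened video, or to fall back to our suggested standard if the reader cannot tell us; the method name is just illustrative):

// Sketch: time attribute for an event element, from frame number and frame rate.
// If the video reader cannot report a frame rate, fall back to our suggested 10 fps standard.
static double eventTimeSeconds(int frameNo, double frameRate) {
    if (frameRate <= 0) {
        frameRate = 10.0;            // fallback: our suggested standard fps
    }
    return frameNo / frameRate;      // t = (1/frame_rate) * frameNo
}
// Example: frame 35 of a 7 fps video -> eventTimeSeconds(35, 7.0) == 5.0 seconds.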

Frame Rate and XML file Test

Test: 1finger.avi
Video length: 14 seconds
Number of frames: 99

Now, if it were true that the video has 30 fps (frames per second) when converted to uncompressed AVI, then the number of frames should be 30 fps * 14 sec = 420 frames, but actually there are 99 frames stacked into the program. I think the frame rate is not 30, or maybe I am missing something.

Correct me if my calculation is wrong. We need to figure this out before we give the XML file to the signal team, and it would help Fan decide what to put in the time attribute of each event element. Right now she is printing the frame number, so a value of 1 means frame number 1, which does not convey the right meaning.

We need to come up with a calculation so that the number of frames stacked into the program and the fps together produce sensible values for the time attribute of each event element in any XML file.

I won't worry about picking up the time directly from the video; that is a long-term goal, and Fan is recording the time when the video is processed, which somewhat makes sense. But if we cannot relate the number of frames produced to the video length, then the signal team won't be able to analyze our XML file to produce anything meaningful, and the findings would be meaningless.

Let's work on it. Looking forward to hearing from the video sub-team (Peter/Fan).

-------->
Update: the tested video was actually 7 fps, so 7 × 14 = 98 frames, which is about right.
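
And the check can be run the other way around: estimate the fps from the frame count and video length (just a throwaway sketch, not part of the program):

static double estimatedFps(int frameCount, double videoSeconds) {
    return frameCount / videoSeconds;   // e.g. 99 frames / 14 sec ≈ 7.07, i.e. about 7 fps
}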

General GUI look change

The Finger Map image was opening in a separate window. Here is my new idea.

How about opening the Finger Map window inside the main window (Finger Tracking) rather than in a separate window?

And making the Finger Map window smaller (maybe by resizing the Finger ID image).

The idea is this: when the main window opens up, put the Finger ID image in a panel on the left side and some quick-start messages in a panel on the right (the console panel may be fine for holding the initial quick-start messages).

From the View menu, the user can show or hide either of these two panels.

The message panel (or the console) might start with:

1. Choose a video to track fingers in, via "File" -> Open
2. Select a finger from the combo box
3. Click on that fingertip in the image to mark it for tracking
4. If you are an advanced user, use "Set Threshold"; otherwise keep the default threshold settings
5. Hit "Go" to start finger tracking
6. Click "Play" to view the tracked finger centroids for your video.

New GUI

Our previous GUI had buttons and some settings in the main window. I was thinking of adding menus as we add more features; I added some and Kyle did some. I proposed a GUI with the Finger Map on the left and the console on the right showing some quick-start messages, and letting users toggle both the Console and Finger Map child windows on and off. We finally implemented it, and the GUI now looks like this.






I liked the check marks before the menu items under the "View" menu; they indicate whether each option is enabled or not. Looks professional :)
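
For the record, this kind of View-menu toggling can be done with Swing's JCheckBoxMenuItem. Here is a minimal, self-contained sketch (the class and panel names are made up, not our actual code):

import java.awt.BorderLayout;
import java.awt.event.ItemEvent;
import java.awt.event.ItemListener;
import javax.swing.*;

public class ViewMenuSketch {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Finger Tracking");
        final JPanel fingerMapPanel = new JPanel();   // would hold the Finger ID image
        final JPanel consolePanel = new JPanel();     // would hold the quick-start messages
        frame.add(fingerMapPanel, BorderLayout.WEST);
        frame.add(consolePanel, BorderLayout.CENTER);

        // Check-box menu items show a check mark and toggle each panel's visibility
        final JCheckBoxMenuItem showMap = new JCheckBoxMenuItem("Finger Map", true);
        final JCheckBoxMenuItem showConsole = new JCheckBoxMenuItem("Console", true);
        showMap.addItemListener(new ItemListener() {
            public void itemStateChanged(ItemEvent e) {
                fingerMapPanel.setVisible(showMap.isSelected());
            }
        });
        showConsole.addItemListener(new ItemListener() {
            public void itemStateChanged(ItemEvent e) {
                consolePanel.setVisible(showConsole.isSelected());
            }
        });

        JMenu viewMenu = new JMenu("View");
        viewMenu.add(showMap);
        viewMenu.add(showConsole);
        JMenuBar menuBar = new JMenuBar();
        menuBar.add(viewMenu);
        frame.setJMenuBar(menuBar);

        frame.setSize(640, 480);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}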

Sunday, May 17, 2009

Test with new Multi Finger Tracking

We tested a bigger video file, about two and a half minutes long, with the new program Kyle and I wrote. It ran successfully. :)

However, we realized that this video is not a typical one: the user did not put their fingers down immediately, but waited a couple of seconds before starting to explore the map.

This led us to the question:

1) What if the user delays before exploring the map?

Ans: This was a problem because we were loading only the first frame, to give the user an opportunity to select a fingertip. We decided we can add a slider and load all the frames; the user slides to a frame where the finger is visible and clicks on the fingertip. However, we might not need to load all frames at the beginning; I think a couple of frames should be enough, which would save time.

Also, users sometimes lifted their hands off the map, so frames with no fingers at all are possible.

This led us to the question:

2) What if a finger moves out of the window and then comes back in at a completely different area?
Ans: We can just record the center as a negative value, say (-1,-1), in our XML file, indicating that no finger was visible to track in that frame.

We were tracking fingertips within a particular rectangle, assuming finger movement won't be that fast, in order to save search time. But there might be situations where the user moved their hands to a completely different area, moved outside the map, or moved so fast that they left the considered search space.

This leads to the question:

3) If the center is not found in the expected search space in the next frame, how should we search?

Ans: The simple way is to search every pixel in the frame, considering the whole pixel grid rather than the previously considered small space. This should work if the finger moved quickly or to a different area. If the finger is outside the frame altogether, put a negative value as the centroid, say (-1,-1).

Peer Programming - Test of Multi Finger Tracking with new Algorithm

Kyle and I sat together to test my proposed algorithm for multi-finger tracking. We completely replaced the old code, where the user had to specify which color to track, which colors to compare it against, and by how much the tracked color had to exceed the comparison colors. These settings were a hassle and ambiguous for a new user, and even a familiar user had to remember these values, which is a pain. Instead, my proposal is that if the user can point to the exact fingertip s/he is interested in, and we can read the exact pixel color there, then we can track that exact color. This is simple and very user friendly.

It worked after we spent around 3 hours. We could trace a finger with my proposed approach. The algorithm was simple enough; we just needed to apply some thresholding. It worked because we were tracking the exact color sampled from the finger rather than assuming some "red", "blue", "green" combination for it.

Monday, May 4, 2009

Multi Finger Tracking with advanced UI

Our next version of the UI will have menus and various settings. When the user opens a video, a frame should come up, and the user will select fingers from the frame and have the finger ID checked in the corresponding checkbox.

Things to figure out:

1. What if the first frame does not have fingers in it?
Probable solution: Let the user slide forward until there is a frame with colored fingers. How many frames would we need to load before the actual tracking begins? We can load all frames before tracking if needed. The best thing would be to input a video that starts with colored fingers on the map, to avoid unnecessary delay.

2. Let the user choose one finger at a time, because the finger ID needs to match the fingertip the user selects.

3. Threshold checking when matching pixel colors, to account for how a pixel's color fluctuates due to shading or anything else. We can make this an advanced setting for advanced users; regular users should use the default threshold our program sets.





There should be play, pause, and stop buttons, a slider, and a progress bar in the second window that comes up when a user feeds a video into the program.

I am hoping to have the UI design done soon.

Features for the Next Version

Needs:

  • Multi - Finger Tracking
  • Good UI -> Settings | Help
  • Finalized XML
  • Good Templated Docs
  • Stress Test / Negative Value Test
  • Coordinate Conversion/Scaling


Easy:

  • Progress Bar
  • Help Menu
  • Settings
  • Batch Processing


Hard:

  • Video conversion
  • Image Registration

Friday, May 1, 2009

My proposed revised algorithm/steps for multi-finger tracking (needs modifications)

1. Load the first frame from the video
2. Select the checkbox of the corresponding finger ID to track
3. Click on the fingertip (of that ID) in the frame to get the (X,Y) of that pixel
4. Put that (X,Y) value in the input box //just to verify that you got nothing weird

Repeat steps 2-4 for as many fingers as you want to track.

When done, click "start tracking".


//Get the color components of the fingertip pixel (X,Y) selected by the user.
//(getR/getG/getB read one color channel of the current frame's pixel;
// frameWidth/frameHeight are the frame dimensions;
// uses java.awt.Point and java.util.ArrayList.)
int r = getR(x, y);
int g = getG(x, y);
int b = getB(x, y);

//Track the centroid of pixels matching (r,g,b) in the given frame.
//The threshold of 20 per channel tolerates noise; the +/-10 pixel search window
//around the previous center can be adjusted.
Point trackCenter(int x, int y, int r, int g, int b, int frameNo) {
    List<Point> candidatePixels = new ArrayList<Point>(); //pixels close to the tracked color

    //Look inside the rectangle (x-10,y-10) .. (x+10,y+10), clamped to the frame
    for (int i = Math.max(0, x - 10); i <= Math.min(frameWidth - 1, x + 10); i++) {
        for (int j = Math.max(0, y - 10); j <= Math.min(frameHeight - 1, y + 10); j++) {
            if (Math.abs(getR(i, j) - r) <= 20
                    && Math.abs(getG(i, j) - g) <= 20
                    && Math.abs(getB(i, j) - b) <= 20) {
                candidatePixels.add(new Point(i, j));
            }
        }
    }

    //If nothing was found near the previous center, search the whole frame
    //from (0,0) to (frameWidth-1, frameHeight-1)
    if (candidatePixels.isEmpty()) {
        for (int i = 0; i < frameWidth; i++) {
            for (int j = 0; j < frameHeight; j++) {
                if (Math.abs(getR(i, j) - r) <= 20
                        && Math.abs(getG(i, j) - g) <= 20
                        && Math.abs(getB(i, j) - b) <= 20) {
                    candidatePixels.add(new Point(i, j));
                }
            }
        }
    }

    //Still nothing: the finger is not visible anywhere in this frame
    if (candidatePixels.isEmpty()) {
        return new Point(-1, -1);
    }
    return findCenter(candidatePixels);
}

//Centroid = average of the candidate pixel coordinates
Point findCenter(List<Point> candidatePixels) {
    int sumX = 0, sumY = 0;
    for (Point p : candidatePixels) {
        sumX += p.x;
        sumY += p.y;
    }
    return new Point(sumX / candidatePixels.size(), sumY / candidatePixels.size());
}

//...or read the R,G,B at the returned center (instead of the original sample)
//and call trackCenter again for the next frame.

Wednesday, April 29, 2009

Can we run batch processing of videos?

While I was showing our first release to Michal, he said it would be a good thing if we could batch process video files rather than handling one at a time. For analysis it is important to have data ready, and it makes more sense to turn many videos into data quickly than to interactively process one video file at a time.

A good thought for the rest of the term.

FingerTracker_v0.1 - First Release from Visual Team

The tester is the person on a software project that teammates don't like. Why? :)


I was looking at the code, docs, and builds, and bugging my teammates to change things after each catch so we could make a clean release. I wouldn't like someone bugging me, and neither do my teammates.

I wrote some unit tests as well, and one test failed; later we fixed that method. I feel unit tests help, but it is quite frustrating to figure out which test cases to write, and it takes a lot of time too.

Our release was 93 MB, and zipped it was 29 MB.

Eclipse is nice for making a runnable JAR; I could make one within a minute, as Eclipse has a quick tool for it.

Here is the first release from our visual team.

In the end, I must say: "we have a really good team."

Tuesday, April 28, 2009

Tech Doc and Read Me

I wrote the ReadMe.txt and TechDoc.html. Fan is going to revise these files and write the UserDoc.html.

I used Microsoft Visio to draw the flow chart. Below is our Visual Team's Finger Tracker program structure (flow diagram).

Monday, April 27, 2009

What am I doing?

I am going to help Fan with documentation and polishing for the first demo, and I will be doing testing, trying to crash the program :))

Thoughts for flexible UI control Panel

Right now our control panel has settings where the user needs to enter the finger ID along with which color to compare against, and we are tracing 3 colors (or three fingers). I feel there is a problem with this:

It is not flexible, and the user needs to think about which color goes with which finger.
I am thinking of making something like a dynamic UI: when the user opens the video, we display the first frame and let the user click on whichever finger s/he wants to track, and we're done.

How to do it?

Use the mouse-down event to get the coordinates of the fingertip pixel, read the color of that pixel, and that is the fingertip we are interested in. Then trace it through all the other frames in the stack. A rough sketch of the idea is below.
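
Here is a minimal sketch in plain Java of picking a fingertip by clicking (the file and class names are made up; in our program the image would be a frame from the video):

import java.awt.Color;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import javax.swing.*;

public class PickFingerTipSketch {
    public static void main(String[] args) throws Exception {
        final BufferedImage frame = ImageIO.read(new File("firstFrame.png")); // hypothetical frame image
        JLabel view = new JLabel(new ImageIcon(frame));
        view.addMouseListener(new MouseAdapter() {
            public void mousePressed(MouseEvent e) {
                int x = e.getX(), y = e.getY();
                Color c = new Color(frame.getRGB(x, y));   // exact color of the clicked fingertip pixel
                System.out.println("Track R=" + c.getRed() + " G=" + c.getGreen()
                        + " B=" + c.getBlue() + " starting at (" + x + "," + y + ")");
            }
        });
        JFrame window = new JFrame("Click the fingertip to track");
        window.add(new JScrollPane(view));
        window.pack();
        window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        window.setVisible(true);
    }
}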

This should be cool to do. I am going to work on it later this term.

Video Session with HD camera and proper settings

I was able to bring a video camera with a tripod from the Knight Library. Fan, Peter, and I worked on the video session.




We set up the camera from above, covering the whole map with the camera's full frame.




We marked the positions of the tripod legs and took some measurements. We also noted down the resolutions of the camera videos, the file formats, etc.

Video File Uploading

We were guessing that we might need much more space than assembla.com provides to upload our videos.

I found www.esnips.com. This site gives 5 GB of free space.
Problem solved?

Code and Test Documentation

Michal gave a presentation on "Documentation," and at some point Abdul asked what to put in the "code documentation"; I had a similar question in mind about "test documentation."

I found this document on "Software Documentation," which gives a general idea of what to put in code and test documentation.

Code Documentation (CD)
You are expected to fully document your code. Every class and class method should have a name, a brief one-line description, and a detailed description of the algorithm. All methods also require descriptions of all inputs and outputs. If applicable, you should also note any caveats – things that could go wrong or things that the code doesn’t address. Put assumptions in the caveats section. If you are coding in Java, you should use the documentation tags that work with the javadoc utility. This utility automatically generates web pages for your documentation. To make things consistent, simply cut and paste the textual descriptions of your classes, objects, and methods from your OOD directly into the code. Then let javadoc do the dirty work. If you are not coding in Java, you can still use the same tags and see if javadoc operates on your source files. Otherwise, you could write such a utility yourself!

Testing Documentation (TD)
The TD describes how you tested your program to prove that it works sucessfully. You should include testbeds for both the user interface and application aspects of your program. You should also provide test data sets where applicable. For example, if you program does some form of text processing, you should provide example file inputs that test specific features of the program requirements. You should pay special attention to borderline values and bogus user input. The TD should also include evaluation criteria, so you know what you are actually testing for.


http://www.assembla.com/spaces/atmr/documents/aZ9_Eul38r3PpneJe5afGb/download/software_documentation.pdf

My role in the team right now

I have been assigned as the general/unit tester and a side coder.

I set up JUnit 4 and integrated it with our project. I wrote some unit test cases, and they passed.
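
For anyone curious what a JUnit 4 test case looks like, here is a made-up example around a hypothetical centroid helper (not one of our actual tests):

import static org.junit.Assert.assertEquals;
import java.awt.Point;
import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class CentroidTest {
    // Hypothetical helper standing in for our centroid code
    static Point findCenter(List<Point> pixels) {
        int sumX = 0, sumY = 0;
        for (Point p : pixels) { sumX += p.x; sumY += p.y; }
        return new Point(sumX / pixels.size(), sumY / pixels.size());
    }

    @Test
    public void centroidOfTwoPixelsIsTheirMidpoint() {
        Point c = findCenter(Arrays.asList(new Point(2, 4), new Point(4, 8)));
        assertEquals(3, c.x);
        assertEquals(6, c.y);
    }
}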

Thanks to Michal for lending me the book he co-authored, "Software Testing and Analysis." I wanted to bring some professionalism to the tester role. I only had time to skim the chapter contents, and I liked the "Problems and Methods" portion of the book, especially the "Testing Object Oriented Software" and "Test Execution" chapters.


Here are the wikis I wrote on Assembla:
https://www.assembla.com/wiki/show/atmr/JUnit
https://www.assembla.com/wiki/show/atmr/jUnit_with_Eclipse

I was looking for some videos on unit testing. I found a few on YouTube that might be helpful to start with, if anyone is interested.

http://www.youtube.com/watch?v=pWGf-tly_JY
http://www.youtube.com/watch?v=Chb8IWZeqp4&feature=related


Right now I am focusing mostly on general testing, as the first deliverable project demo is due on April 29, 2009.
Some catches:
Our code was working separately for 1 finger and for 2 fingers, but it needs to work for however many fingers we want to track. So we are going to add some settings to our UI.

I was also testing videos to find out whether different frame rates or resolutions affect the finger tracking and centroid finding. It seems they do not.
I tested 7 fps and 24 fps, and resolutions of 300×400 (my camera) and 840×648 (the camera from the library); I got the same behavior and could track the centroid in each case.

Will be doing more tests.

Automatic Analysis of finger movements during tactile reading (Braille and Tactile pictures).

I found a very important paper on a "computer-based automatic finger- and speech-tracking system." The authors claim this work is the "first technology ever for online registration and interactive and automatic analysis of finger movements during tactile reading (Braille and tactile pictures)."

The system looks like the figure below.



Please find the paper here.
https://www.assembla.com/spaces/atmr/documents/afEZtykF8r3PXJeJe5afGb/download/finger_speech.pdf

Amy's Presentation - Q&A

I was very interested to listen to someone involved in real research on the topic (blind users' map reading) whose analysis part we are trying to automate. I was happy to take notes in that class. The best parts of the Q&A from her presentation are here:
https://www.assembla.com/wiki/show/atmr/Minutes_15-Apr-2009

I was wondering whether she had ever tried recording video of the experiments she runs. The answer is no. She might want to start doing that if we impress her :)

Coordinates and Resolutions

I had a short discussion with Michal about how to match the tablet team's coordinate system with the visual team's. I think we can scale one to the other.

During one of our meetings, Peter, Fan, and I talked about the effect of the camera setup on the video's coordinate system. Do we need cropping or rectification? This might be a point where we need some research; right now we are planning to set up the video so that the map fits the whole screen, which makes life easier. If we can't handle the map rotation issues this term, this might be a good task for future developers next term. :)

Algorithm Research

We were discussing whether we should use a more advanced algorithm to extract the centroid, something like k-means (used for clustering). Kristy came up with another idea to remove the requirement of putting colored markers on fingers: we could use a built-in tool (an algorithm implementation called the snake algorithm :) that can capture a whole object if we point to any part of it. If we can detect any part of the hand, the tool would gradually grow to grab the whole object. This might be a good way to remove the constraint of putting colors on fingers.

jDom in Use

In Fall 2008, for the "Software Methodology I" class, I used JDOM for XML work in Java. JDOM sits nicely on top of DOM and SAX and makes XML file manipulation in Java much more flexible. Fan used the JDOM library to write the XML output file of our video processing.
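
As an illustration, writing one event element with JDOM looks roughly like this (the element and attribute names are made up; Fan's actual output format may differ):

import java.io.FileWriter;
import org.jdom.Document;
import org.jdom.Element;
import org.jdom.output.Format;
import org.jdom.output.XMLOutputter;

public class XmlOutputSketch {
    public static void main(String[] args) throws Exception {
        Element root = new Element("fingerTracking");
        Element event = new Element("event");                  // one tracked centroid
        event.setAttribute("frame", "35");
        event.setAttribute("time", String.valueOf(35 / 7.0));  // t = frameNo / fps, assuming 7 fps here
        event.setAttribute("x", "120");
        event.setAttribute("y", "85");
        root.addContent(event);

        XMLOutputter out = new XMLOutputter(Format.getPrettyFormat());
        out.output(new Document(root), new FileWriter("tracking.xml"));
    }
}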

Idea implementation using ImageJ

Offline videos can be loaded into ImageJ stacks, giving us a stack of frames. Then we can extract information from the individual images, say by comparing pixel colors. This is the same idea I expressed in the previous post. Thanks to Kyle for implementing this with ImageJ; I had the same implementation in Processing 1.0. Because we feel that if our code is going to be used for more information extraction in the future, ImageJ might be a better choice than Processing, we are focusing on ImageJ.
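
A minimal sketch of that workflow with ImageJ (assuming the AVI is one ImageJ's built-in reader can open and that the frames are color; the names and pixel position are just for illustration):

import ij.IJ;
import ij.ImagePlus;
import ij.ImageStack;
import ij.process.ImageProcessor;

public class StackWalkSketch {
    public static void main(String[] args) {
        ImagePlus imp = IJ.openImage("1finger.avi");    // load the video as a stack of frames
        ImageStack stack = imp.getStack();
        for (int i = 1; i <= stack.getSize(); i++) {    // stack slices are 1-based in ImageJ
            ImageProcessor ip = stack.getProcessor(i);
            int rgb = ip.getPixel(100, 100);            // packed RGB of one pixel in this frame
            int r = (rgb >> 16) & 0xff, g = (rgb >> 8) & 0xff, b = rgb & 0xff;
            System.out.println("frame " + i + " pixel(100,100) = " + r + "," + g + "," + b);
        }
    }
}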

Friday, April 10, 2009

Possibly the simplest way to extract information from video

Pipeline :

----------------------------------------------------------------

Capture Video -> Capture Frame -> Extract Finger -> Point Stream

----------------------------------------------------------------

Assuming the point stream would contain point(x,y) for the finger centroids, plus a timestamp.


Procedure :

1. Using the Processing API it is possible to load the pixels of a frame.

2. After we get the pixels, we look at the RGB components of each pixel's color. If we know the finger has a blue patch on it, we compare the blue (B) component of each pixel with a threshold, say 200 (0 is min, 255 is max), and collect the coordinates of those pixels whose blue component is above 200, indicating the blue points of the finger that we are interested in.

3. To get the centroid of these points, we can simply take the average of the point vector.

4. To get the timestamp: if the frames are stored sequentially and we know the frame rate, we know (or can set) the time gap between any two frames and can therefore assign a timestamp to each frame.

Note: As we go forward, we might need to consider other factors and adjust this plan for retrieving information from the video. A rough Java sketch of steps 2-4 follows.
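
This sketch treats a frame's pixels as a row-major int[] array like Processing's pixels[]; the names and the threshold of 200 are only the example values from above:

import java.awt.Point;

public class BluePatchSketch {
    // Steps 2-3: collect pixels whose blue component exceeds the threshold and average them.
    // pixels is row-major (index = y * width + x); each entry is a packed 0xRRGGBB value.
    static Point blueCentroid(int[] pixels, int width, int height, int threshold) {
        long sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int blue = pixels[y * width + x] & 0xff;    // blue is the low byte
                if (blue > threshold) {
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }
        if (count == 0) {
            return new Point(-1, -1);                       // no blue patch found in this frame
        }
        return new Point((int) (sumX / count), (int) (sumY / count));
    }

    // Step 4: timestamp of a frame, given the frame rate.
    static double timestampSeconds(int frameNo, double fps) {
        return frameNo / fps;
    }
}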

Demo on Processing 1.0

I gave the demo on "How Processing can be a useful tool for real-time video processing."

1. I showed that it is possible to capture real-time video from the Logitech webcam,
2. and then to save frames from the video.

3. I then showed (mostly described my idea of) how a finger with a blue patch on it can be extracted from a frame.

My explanation of why Processing can be a good choice

This post comes from my reply to Michal's query:

https://www.assembla.com/flows/show/bvDxR8itur3OtTeJe5afGb

For Video Processing:
I looked into whether Processing 1.0 can be used for video processing. Using this API, we can capture video in real time (offline capture is possible as well, with the loadvideo() method of this library), and I was able to write sample code for capturing video in real time with it. I could also extract frames from the captured videos. Images in .gif, .jpg, .png, and .tga formats can be processed with this API. After extracting the frames, I could load the images' pixels into the pixels[] array, and I could even crop frames. Some other features, like frame differencing (to detect changes between two frames) and background removal, can also be done with the Processing API, and we might use them to retrieve finger positions and other information. I showed the demo to our teammates, and now we are all looking into the features of this API. This can be presented in next Monday's class as a proof of concept.

One more thing: this API supports video capture with a webcam, which costs very little. I used a Logitech webcam that cost $25.

Benefits of using this API
1. Java based
2. Can be used in Eclipse
3. Well documented
4. Less code
5. Rich Library for Video Processing

Demerits:
1. It might not run on Linux, as this API needs Apple's QuickTime installed, which has not been released for Linux.

Alternatives to Processing 1.0
Peter is now looking into Processing 1.0 to compare the ImageJ API with it. We will figure it out soon, but as of our meeting last Friday we are impressed with Processing 1.0.

For Signal Processing:

I did not look into signal processing much, but while researching video processing I found that some people have written signal processing applications using the Processing 1.0 API as well, and those are well documented too (links are given in the first post of this thread). I believe that if we choose consistent tools for both the visual and signal processing teams, using Processing 1.0 rather than MATLAB or some other signal processing tool, it will help when we collaborate. I think the signal processing team could try writing some sample code and see whether this API can be used for signal processing or not.

Note: It would be worth looking into the features of this API to get a sense of whether it would be useful or not; I might be wrong. All of our visual team members are looking at it. Anyone else interested can check the wikis with some external links that I wrote here:
https://www.assembla.com/wiki/show/atmr/All_About_Processing_1-0

Can Processing 1.0 be used for signal processing as well?

Our video processing team is planning to use Processing 1.0 for (real-time) video processing. At some point the video processing team and the signal processing team will need to collaborate. I found that Processing 1.0 might be usable even for signal processing, though I am not sure.

I found some links that are worth checking, and I emailed them to the signal processing team.

http://cnx.org/content/m13045/latest/
http://cnx.org/content/m13046/latest/
http://cnx.org/content/m10087/latest/
http://cnx.org/content/m13047/latest/

I believe using the same tools for both video and signal processing will help when the two teams collaborate.

Talked to Peter about the decision on which API to use

Peter and I talked about "which API to use?"

Right now we do not know for sure how complex the analysis might need to be, and we do not know the effects of resolution. We think both Processing 1.0 and ImageJ can be added to the project, and we can use whichever fits our requirements as the project goes on.

Processing 1.0 VS ImageJ

Research from our visual team leads us to believe that:

For real-time video capture and image processing (if complex analysis is not required), Processing 1.0 can be a quicker and simpler way to extract very basic information from videos and frames.

For complex analysis, ImageJ seems like the better solution. ImageJ is not capable of real-time video capture, but it is a strong tool for offline use.

Rectification - What did Michal mean? Got an answer

In our first weekly meeting we also got confused by the word "rectification."

Peter emailed Michal; Michal and Daniel replied, and the summary, if I understood it correctly, is:

"Remove as many things required to retrieve all the information we need to extract from the Frames, say finger tips (points, centroid, time-stamp)"

And it would be better not to emphasize the word "rectification," but rather to focus on retrieving the required information correctly.

Here is the message on rectification and the reply from Michal.

Our First Visual Team meeting

We did a couple of things at our first meeting. I brought my video camera and the tactile map so we could shoot some sample videos.

1. Shot video of the tactile map with a blindfolded person
Videos on YouTube

2. Made a week-by-week plan

3. Distributed tasks

4. Talked about basic ideas for the image processing, such as thresholding, rectification, use of the k-means algorithm, etc.

Friday, April 3, 2009

Point Stream ??

What should the point stream be? A 2D array of point coordinates p(x,y)? Image rectification is said to convert a 2D search into a 1D one, so if we apply rectification, what are we going to get?

Then again, there is a built-in loadPixels() function in the video library of Processing 1.0 that loads (saves) pixels into the video_frame.pixels[] array, and it is one-dimensional. It seems as though the video library of Processing 1.0 automatically "rectifies" before loading pixels, since pixels[] is a one-dimensional array.
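
For what it's worth, as I understand it the 1D pixels[] array is just the 2D grid flattened row by row, so the two views are interchangeable:

// Converting between the 1D pixels[] index and 2D (x, y) coordinates,
// assuming the usual row-major layout (width pixels per row).
int width = 640;                 // example frame width
int x = 10, y = 5;               // example 2D coordinate
int index = y * width + x;       // 2D -> 1D
int backX = index % width;       // 1D -> 2D
int backY = index / width;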

So what should we have in our point stream? 1D pixels, or 2D points?

Rectification confusion again

After some googling on image rectification, I found this:

Why Rectification?
To change a 2-D search problem in general into a 1-D search problem

How Rectification?
Basic idea: The rectified images can be thought of as acquired by a new stereo rig, obtained by rotating the original cameras around their optical centers.


What exactly does Michal mean by image rectification?

Thursday, April 2, 2009

I was able to install Processing 1.0 and write some sample code with it.

It seems Processing has a very rich video processing library. And we would definitely only need to write small pieces of code, as with Python; it uses mostly Java with slight modifications that make life even easier.

Wednesday, April 1, 2009

My confusion regarding rectifying Map Frames

I was wondering how to relate frames captured from the tablets and the video cameras for rectification. Michal said rectification can be done linearly using the fingertips' coordinates. But actually we won't relate or compare frames from the tablets with those from the video cameras; rather, we will rectify them separately. It was my misinterpretation of the pipeline sketch. Now I get it!!! We are going to do the capturing separately, so the rectification now makes sense.

Braille Reading paper from Michal

These researchers attached a pen to the dominant finger of the blind participant, a little above the fingertip, and in this way they could capture the finger trace. One interesting thing was that they gave the participants freedom to read or scan in their own way; that is a good lesson we might use to get natural finger movements from participants. So I still believe using blindfolded people instead of real blind subjects would make a difference in analyzing finger-tracing behavior, and letting participants have a lot of freedom in their finger movement would give more realistic observations.

My Plan on our first Visual Team Meeting

I got a tactile map, and I think we can start capturing some sample videos at our first meeting while one of us tries to act as an unsighted person. We can then make some still frames or even start analyzing how we would really apply the video processing. I can also bring my video camera until we get a working one from the media center.

Our lead, Peter, thinks we can try this. Let's see what we can get.

I got one tactile map from Xiangkui

I want to bring this to Monday's class. Using this kind of map, visually impaired people can touch the paper and feel the raised ink to perceive the objects on the page. This seems very useful for unsighted people. Until I actually saw it, I hadn't realized how effective this kind of map can be for blind users' navigation.

My task for Monday 04/06/2009

I need to provide a proof of concept that Processing 1.0 can be used for finger tracking. It is a Java-based language, and it seems we would need even less code than in plain Java, somewhat like Python.

I found some people saying they were able to use Processing for face detection, and I hope I can find a way to use it for finger tracing.

Processing 1.0

The MIT Media Lab has created a language for processing images and interactions. I believe we can use this Processing 1.0 for the video capturing part.

Link :
http://processing.org/