While I was showing our first release to Michal, he said it would be a good idea to support batch processing of video files rather than processing one at a time. To analyze data it is important to have data ready, and it makes more sense to produce data from many videos quickly than to process one video file interactively.
A good thought for the rest of the term.
Wednesday, April 29, 2009
FingerTracker_v0.1 - First Release from Visual Team
A tester is someone who is not much liked by the teammates on a software project. Why? :)
I was looking at the code, docs, and builds, and bugging my teammates to change things after each catch so we could make a clean release. I wouldn't like someone bugging me, and neither do my teammates.
I wrote some unit tests as well, and one test failed; later we fixed that method. I feel unit tests help, but it is quite frustrating to figure out which test cases to write, and it takes quite a bit of time as well.
Our release was 93 MB, and zipped it was 29 MB.
Eclipse makes it easy to create a runnable JAR, and I could build one within minutes using its built-in export tool.
Here is the first release from our visual team.
At the end I must say "we have a really good team"
Tuesday, April 28, 2009
Tech Doc and Read Me
Monday, April 27, 2009
What am I doing?
I am going to help Fan with documentation and polishing for the first demo, and I will do testing, trying to crash the program :))
Thoughts for flexible UI control Panel
Right now our control panel has settings where the user needs to enter the finger ID along with the color to compare against, and we are working on tracing 3 colors (or three fingers). I feel there is a problem with this:
It is not flexible, and the user has to think about which color to track with which finger.
I am thinking of making something like a dynamic UI. Say, when the user opens the video, we display the first frame and let the user click on whichever finger s/he wants to track, and done.
How to do it?
Use the mouseDown event to get the coordinates of the clicked pixel at the fingertip, read the color of that pixel, and that is the fingertip we are interested in. Then trace it through all the other frames in the stack.
This would be cool to do. I am going to work on it later this term.
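Here is a minimal sketch of the idea in Processing (in Processing the mouseDown event is called mousePressed(); the frame file name is just an assumption):

// Show the first frame and pick the fingertip color by clicking on it.
PImage firstFrame;
color targetColor;

void setup() {
  size(640, 480);
  firstFrame = loadImage("frame0001.jpg");   // hypothetical first frame of the video
}

void draw() {
  image(firstFrame, 0, 0);
}

void mousePressed() {
  // read the color of the pixel under the mouse on the first frame
  targetColor = firstFrame.get(mouseX, mouseY);
  println("Tracking color: " + red(targetColor) + ", " + green(targetColor) + ", " + blue(targetColor));
}

After this, the same targetColor can be compared against the pixels of every other frame in the stack.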
Video Session with HD camera and proper settings
I brought the video camera with a tripod from the Knight Library. Fan, Peter, and I worked on the video session.

We set up the camera from above, covering the whole map with the camera's full frame.

We marked the positions of the tripod legs and took some measurements, and noted down the resolutions of the camera's videos, the file formats, etc.
Video File Uploading
We guessed that we might need much more space than assembla.com provides in order to upload our videos.
I found www.esnips.com. This site gives 5 GB of free space.
Problem solved?
Code and Test Documentation
Michal gave a presentation on "Documentation," and at some point Abdul asked what to put in "code documentation"; I had a similar question in mind about "test documentation."
I found this document on "Software Documentation," which gives a general idea of what to put in code and test documentation.
Code Documentation (CD)
You are expected to fully document your code. Every class and class method should have a name, a brief one-line description, and a detailed description of the algorithm. All methods also require descriptions of all inputs and outputs. If applicable, you should also note any caveats – things that could go wrong or things that the code doesn’t address. Put assumptions in the caveats section. If you are coding in Java, you should use the documentation tags that work with the javadoc utility. This utility automatically generates web pages for your documentation. To make things consistent, simply cut and paste the textual descriptions of your classes, objects, and methods from your OOD directly into the code. Then let javadoc do the dirty work. If you are not coding in Java, you can still use the same tags and see if javadoc operates on your source files. Otherwise, you could write such a utility yourself!
Testing Documentation (TD)
The TD describes how you tested your program to prove that it works successfully. You should include testbeds for both the user interface and application aspects of your program. You should also provide test data sets where applicable. For example, if your program does some form of text processing, you should provide example file inputs that test specific features of the program requirements. You should pay special attention to borderline values and bogus user input. The TD should also include evaluation criteria, so you know what you are actually testing for.
http://www.assembla.com/spaces/atmr/documents/aZ9_Eul38r3PpneJe5afGb/download/software_documentation.pdf
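As a concrete example of the javadoc tags described in the code documentation section above, here is roughly what such a comment could look like on one of our methods (the method and its parameters are made up for illustration, not our actual code):

/**
 * Finds the centroid of all pixels whose blue component exceeds a threshold.
 *
 * @param pixels    the pixel array of one video frame (packed ARGB ints)
 * @param width     frame width in pixels
 * @param threshold minimum blue value (0-255) for a pixel to count as "finger"
 * @return the centroid as an int array {x, y}, or null if no pixel matched
 */
public static int[] findCentroid(int[] pixels, int width, int threshold) {
    long sumX = 0, sumY = 0, count = 0;
    for (int i = 0; i < pixels.length; i++) {
        int blue = pixels[i] & 0xFF;              // the low byte of an ARGB int is blue
        if (blue > threshold) {
            sumX += i % width;
            sumY += i / width;
            count++;
        }
    }
    if (count == 0) return null;
    return new int[] { (int) (sumX / count), (int) (sumY / count) };
}

Running javadoc over a class documented like this generates the web pages automatically.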
My role in the team right now
I have been assigned as the general/unit tester and a side coder.
I set up JUnit 4 and integrated it with our project, wrote some unit test cases, and those test cases passed.
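Here is a minimal JUnit 4 test of the kind I wrote (the FingerTracker class and findCentroid method named here are hypothetical, just to show the shape of a test):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CentroidTest {

    @Test
    public void centroidOfSingleBluePixelIsThatPixel() {
        // a 2x2 frame where only the pixel at (1, 0) has a high blue component
        int[] pixels = { 0x000000, 0x0000FF, 0x000000, 0x000000 };
        int[] centroid = FingerTracker.findCentroid(pixels, 2, 200);
        assertEquals(1, centroid[0]);
        assertEquals(0, centroid[1]);
    }
}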
Thanks to Michal for lending me his co-authored book "Software Testing and Analysis." I wanted to bring some professionalism to the tester role. I only had time to skim the chapter contents, and I liked the "Problems and Methods" part of the book, especially the "Testing Object-Oriented Software" and "Test Execution" chapters.
Here are the wikis I wrote on Assembla:
https://www.assembla.com/wiki/show/atmr/JUnit
https://www.assembla.com/wiki/show/atmr/jUnit_with_Eclipse
I was looking for videos on unit testing. I found some on YouTube that might be helpful to start with, if anyone is interested:
http://www.youtube.com/watch?v=pWGf-tly_JY
http://www.youtube.com/watch?v=Chb8IWZeqp4&feature=related
Right now I am focusing mostly on general testing, as the first deliverable project demo is due on April 29, 2009.
Some catches:
Our code worked separately for 1 finger and for 2 fingers, but it needs to work with however many fingers we want to track, so we are going to add some settings to our UI.
I was also testing videos to find out whether different frame rates or resolutions have an impact on finger tracking and finding the centroid. It seems they do not.
I tested 7 FPS and 24 FPS, and resolutions of 300*400 (my camera) and 840*648 (the camera from the library). I got the same results and could track the centroid in every case.
Will be doing more tests.
Automatic Analysis of finger movements during tactile reading (Braille and Tactile pictures).
I found a very important paper on a "computer-based automatic finger- and speech-tracking system." They claim this work is the "first technology ever for online registration and interactive and automatic analysis of finger movements during tactile reading (Braille and tactile pictures)."
The system looks like the figure below.

Please find the paper here.
https://www.assembla.com/spaces/atmr/documents/afEZtykF8r3PXJeJe5afGb/download/finger_speech.pdf
Amy's Presentation - Q&A
I was very interested to listen to someone involved in real research on the topic (blind users' map reading) whose analysis we are trying to automate. I was happy to take notes in that class. Find the best parts of the questions and answers from her presentation here:
https://www.assembla.com/wiki/show/atmr/Minutes_15-Apr-2009
I was wondering whether she had ever tried videotaping the experiments she runs. The answer is no. She might want to start now if we impress her :)
Coordinates and Resolutions
I had a small discussion with Michal about how to match the coordinate system of the tablet team with that of the visual team. I think we can scale one to the other.
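A minimal sketch of the kind of scaling I mean, assuming both the video frame and the tablet cover the same map area and only the resolutions differ (the method and its parameters are just an illustration):

// Map a point from the video frame's coordinate system to the tablet's.
public static double[] videoToTablet(int x, int y,
                                     int videoW, int videoH,
                                     int tabletW, int tabletH) {
    double scaleX = (double) tabletW / videoW;
    double scaleY = (double) tabletH / videoH;
    return new double[] { x * scaleX, y * scaleY };
}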
During one of our meetings, Peter, Fan, and I were talking about the effect of the camera setup on the video's coordinate system. Do we need cropping or rectification? This might be a point where we need some research. Right now we are planning the video setup so that the map fits the whole screen, which makes life easier. If we can't handle map rotation this term, it might be a good task for the next term's developers. :)
Algorithm Research
We were thinking about whether we should use a more advanced algorithm to extract the centroid, something like k-means (used for clustering), and Kristy came up with another idea for removing the requirement of putting colors on the fingers. The idea is to use a built-in tool (an implementation of the Snake algorithm :) that can capture a whole object if we can point at any part of it. If we can somehow detect any part of the hand, the tool would gradually grow to grab the whole object. This might be a good way to remove the constraint of putting colors on the fingers.
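For reference, here is a minimal sketch of what k-means over the detected finger pixels could look like (plain Java, purely an illustration, not our project code); each cluster center would correspond to one fingertip:

import java.util.Random;

public class KMeans2D {
    // points[i] = {x, y}; returns k cluster centers after a fixed number of iterations
    public static double[][] cluster(double[][] points, int k, int iterations) {
        Random rnd = new Random(42);
        double[][] centers = new double[k][2];
        for (int c = 0; c < k; c++)
            centers[c] = points[rnd.nextInt(points.length)].clone();   // random initialization

        int[] assignment = new int[points.length];
        for (int it = 0; it < iterations; it++) {
            // assign each point to its nearest center
            for (int i = 0; i < points.length; i++) {
                double best = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double dx = points[i][0] - centers[c][0];
                    double dy = points[i][1] - centers[c][1];
                    double d = dx * dx + dy * dy;
                    if (d < best) { best = d; assignment[i] = c; }
                }
            }
            // move each center to the mean of its assigned points
            for (int c = 0; c < k; c++) {
                double sx = 0, sy = 0; int n = 0;
                for (int i = 0; i < points.length; i++)
                    if (assignment[i] == c) { sx += points[i][0]; sy += points[i][1]; n++; }
                if (n > 0) { centers[c][0] = sx / n; centers[c][1] = sy / n; }
            }
        }
        return centers;
    }
}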
jDom in Use
In Fall 2008, for the "Software Methodology I" class, I used JDOM for XML work in Java. JDOM is built on top of DOM and SAX and makes XML file manipulation in Java much more flexible. Fan used the JDOM library to write the XML output file of our video processing.
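A rough sketch of the kind of JDOM output code involved (the element and attribute names here are assumptions, not our actual output format):

import java.io.FileWriter;
import org.jdom.Document;
import org.jdom.Element;
import org.jdom.output.Format;
import org.jdom.output.XMLOutputter;

public class PointStreamWriter {
    public static void main(String[] args) throws Exception {
        Element root = new Element("pointStream");

        // one sample point; the real code would loop over every processed frame
        Element point = new Element("point");
        point.setAttribute("x", "120");
        point.setAttribute("y", "85");
        point.setAttribute("timestamp", "0.125");
        root.addContent(point);

        Document doc = new Document(root);
        XMLOutputter out = new XMLOutputter(Format.getPrettyFormat());
        out.output(doc, new FileWriter("points.xml"));
    }
}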
Idea implementation using ImageJ
Offline videos can be loaded into ImageJ as stacks, giving us a stack of frames. Then we can extract the information from the individual images, say by comparing pixel colors. This is the same idea I expressed in the previous post; thanks to Kyle for implementing it with ImageJ. I had the same implementation in Processing 1.0. Because we feel that if our code is going to be used for more information extraction in the future, ImageJ might be a better choice than Processing, we are focusing on ImageJ.
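A minimal sketch of the ImageJ version of the idea, assuming the video has already been imported as an RGB image stack (the file name and threshold are placeholders):

import ij.IJ;
import ij.ImagePlus;
import ij.ImageStack;
import ij.process.ImageProcessor;

public class StackScan {
    public static void main(String[] args) {
        ImagePlus imp = IJ.openImage("video_stack.tif");   // hypothetical stack file
        ImageStack stack = imp.getStack();

        for (int slice = 1; slice <= stack.getSize(); slice++) {   // slices are 1-indexed
            ImageProcessor ip = stack.getProcessor(slice);
            long sumX = 0, sumY = 0, count = 0;
            for (int y = 0; y < ip.getHeight(); y++) {
                for (int x = 0; x < ip.getWidth(); x++) {
                    int blue = ip.getPixel(x, y) & 0xFF;   // blue component of the RGB pixel
                    if (blue > 200) { sumX += x; sumY += y; count++; }
                }
            }
            if (count > 0)
                System.out.println("Frame " + slice + " centroid: "
                        + (sumX / count) + ", " + (sumY / count));
        }
    }
}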
Friday, April 10, 2009
Possibly the simplest way to extract information from video
Pipeline :
----------------------------------------------------------------
Capture Video -> Capture Frame -> Extract Finger -> Point Stream
----------------------------------------------------------------
Assuming the point stream would contain a point (x, y) for the centroid of each finger and a timestamp.
Procedure:
1. Using the Processing API it is possible to load the pixels of a frame.
2. After we get the pixels, we look at the RGB components of each pixel's color. If we know the finger has a blue patch on it, we compare the blue (B) component of each pixel against a threshold, say 200 (0 is the minimum, 255 the maximum), and collect the coordinates of the pixels whose blue component is above the threshold; these are the blue points of the finger we are interested in.
3. To get the centroid of these points we simply take the average of their coordinates.
4. To get the timestamp: if the frames are stored sequentially and we know the frame rate, we know the time gap between two consecutive frames and can assign a timestamp to each frame.
Note: As we go forward we might need to consider other factors and adjust the plan for retrieving information from the video.
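Here is a minimal Processing sketch of this procedure, assuming the frames have already been saved as sequentially numbered image files (the file names, frame count, frame rate, and threshold are all assumptions):

// Steps 1-4 over a sequence of saved frames.
int numFrames = 100;      // assumed number of saved frames
float fps = 24;           // assumed capture frame rate
int threshold = 200;      // blue threshold from step 2

void setup() {
  for (int f = 1; f <= numFrames; f++) {
    PImage frame = loadImage("frame" + nf(f, 4) + ".jpg");
    frame.loadPixels();                           // step 1: load the pixels
    float sumX = 0, sumY = 0;
    int count = 0;
    for (int i = 0; i < frame.pixels.length; i++) {
      if (blue(frame.pixels[i]) > threshold) {    // step 2: compare the blue component
        sumX += i % frame.width;
        sumY += i / frame.width;
        count++;
      }
    }
    if (count > 0) {
      float cx = sumX / count;                    // step 3: centroid = average of the points
      float cy = sumY / count;
      float t = (f - 1) / fps;                    // step 4: timestamp from the frame rate
      println(t + " s -> (" + cx + ", " + cy + ")");
    }
  }
  exit();
}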
Demo on Processing 1.0
I showed the demo on how Processing can be a useful tool for real-time video processing.
1. I showed that it was possible to capture real-time video from the Logitech webcam.
2. And then to save frames from the video.
3. I then showed (mostly described my idea of) how a finger with a blue patch on it can be extracted from a frame.
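A minimal sketch of what the capture-and-save part of the demo does, using the Capture class from Processing's video library (the sketch size and frame rate are assumptions):

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height, 30);  // open the webcam at an assumed 30 FPS
}

void captureEvent(Capture c) {
  c.read();                                    // read the newest frame from the webcam
}

void draw() {
  image(cam, 0, 0);                            // show the live video
  saveFrame("frame-####.jpg");                 // save each drawn frame to disk
}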
My explanation why processing can be a good choice
This post comes from my reply to Michal's query:
https://www.assembla.com/flows/show/bvDxR8itur3OtTeJe5afGb
For Video Processing:
I looked into whether Processing 1.0 can be used for video processing. Using this API we would be able to capture video in real time (offline capture can be done as well, with the loadvideo() method of this library), and I wrote sample code for capturing video in real time with it. I could also extract frames from the captured video. Images in .gif, .jpg, .png, and .tga formats can be processed with this API. After extracting the frames I could load the pixels of the images into a pixels[] array, and I could even crop frames. Some other features, like frame differencing (to find the changes between two frames) and background removal, can also be done with the Processing API, and we might use them to retrieve the finger positions and other information. I showed the demo to our teammates, and now we are all looking into the features of this API. This can be presented in next Monday's class as a proof of concept.
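As an example of the frame-differencing feature mentioned above, here is a minimal Processing sketch that compares two already-saved frames (the file names and frame size are assumptions):

// Highlight what changed between two consecutive frames.
void setup() {
  size(640, 480);                              // assumed frame size
  PImage prev = loadImage("frame0001.jpg");    // hypothetical saved frames
  PImage curr = loadImage("frame0002.jpg");
  prev.loadPixels();
  curr.loadPixels();

  PImage diff = createImage(curr.width, curr.height, RGB);
  diff.loadPixels();
  for (int i = 0; i < curr.pixels.length; i++) {
    float d = abs(brightness(curr.pixels[i]) - brightness(prev.pixels[i]));
    diff.pixels[i] = color(d);                 // bright where something moved
  }
  diff.updatePixels();
  image(diff, 0, 0);
}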
One more thing: this API supports video capture using a webcam, which costs very little. I used a Logitech webcam that cost $25.
Benefits of using this API
1. Java based
2. Can be used in Eclipse
3. Well documented
4. Less code
5. Rich Library for Video Processing
Demerits:
1. It might not run on Linux, as this API needs Apple's QuickTime installed, which is not available for Linux.
Alternatives to Processing 1.0
Peter is now looking into Processing 1.0 to compare the ImageJ API with it. We will figure it out soon, but as of our meeting last Friday we are impressed with Processing 1.0.
For Signal Processing:
I did not look much into signal processing, but while looking into video processing I found that some people have written signal processing applications using the Processing 1.0 API as well, and those are well documented too (links are given in the first post of this message). I believe that if both the visual and signal processing teams are consistent and choose Processing 1.0 rather than MATLAB or some other signal processing tool, it will help when we collaborate. I think the signal processing team could try writing some sample code and see whether this API can be used for signal processing or not.
Note: It would be worth looking into the features of this API to get a feel for whether it would be useful or not. I might be wrong. All of our visual team members are looking at it. Anyone else who is interested can look at the wikis, with some external links, that I wrote here:
https://www.assembla.com/wiki/show/atmr/All_About_Processing_1-0
Can Processing 1.0 be used for Signal Processing as well?
Our video processing team is planning to use Processing 1.0 for (real-time) video processing. At some point the video processing team and the signal processing team will need to collaborate. I found that Processing 1.0 might be usable even for signal processing, though I am not sure.
I found some links that would be worth checking, and I emailed the signal processing team:
http://cnx.org/content/m13045/latest/
http://cnx.org/content/m13046/latest/
http://cnx.org/content/m10087/latest/
http://cnx.org/content/m13047/latest/
I believe that using the same tools for both video and signal processing will help when the two teams collaborate.
Talked to Peter about the decision on which API to use
Peter and I talked about which API to use.
Right now we do not know for sure how complex an analysis we might need, and we do not know the effects of resolution. We think both Processing 1.0 and ImageJ can be added to the project, and we can use whichever fits our requirements as we go.
Processing 1.0 vs. ImageJ
Research from our visual team makes us believe that:
For real-time video capture and image processing (if complex analysis is not required), Processing 1.0 can be a quicker and simpler solution for extracting basic information from videos and frames.
For complex analysis, ImageJ seems like the stronger solution. ImageJ is not capable of real-time video capture, but it is a strong tool for offline use.
Rectification - What did Michal mean? Got the answer
In our first weekly meeting we also got confused by the word "rectification."
Peter emailed Michal; Michal and Daniel replied, and the summary, if I understood it, is:
"Remove as much as required to retrieve all the information we need to extract from the frames, e.g. the fingertips (points, centroid, timestamp)."
And it would be better not to put too much emphasis on the word "rectification," but rather to retrieve the required information correctly.
Here is the message on rectification and the reply from Michal.
Our First Visual Team meeting
We did a couple of things at our first meeting. I brought my video camera and the tactile map so we could capture some sample videos.
1. Shot video over the tactile map using a blindfolded person
Videos on YouTube
2. Made a plan on weekly basis
3. Distributed tasks
4. Talked about basic ideas for doing the image processing, such as thresholding, rectification, use of the k-means algorithm, etc.
Friday, April 3, 2009
Point Stream ??
What should the point stream be? A 2D array of point coordinates p(x, y)? Image rectification supposedly converts a 2D space into a 1D space, so if we want rectification, what are we going to get?
Then again, there is a built-in loadPixels() function in the video library of Processing 1.0 which loads (saves) pixels into the video_frame.pixels[] array, and it is one-dimensional. It seems like the video library of Processing 1.0 automatically rectifies before loading pixels, since pixels[] is a one-dimensional array.
So what should we have in our point stream? 1D pixels, or 2D points?
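For reference, the 1D pixels[] array is simply the 2D image flattened row by row, so the two views are interchangeable. Here is a tiny Processing sketch of the index arithmetic (the image size is just an assumption):

PImage img;

void setup() {
  img = createImage(320, 240, RGB);        // assumed frame size
  img.loadPixels();

  int x = 100, y = 50;
  int index = y * img.width + x;           // 2D (x, y) -> 1D index
  color c = img.pixels[index];

  int backX = index % img.width;           // 1D index -> 2D (x, y)
  int backY = index / img.width;
  println(x + "," + y + " <-> index " + index + " <-> " + backX + "," + backY + " color " + hex(c));
}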
Rectification confusion again
After some googling on image rectification, I found this:
Why Rectification?
To change a 2-D search problem in general into a 1-D search problem
How Rectification?
Basic idea: The rectified images can be thought of as acquired by a new stereo rig, obtained by rotating the original cameras around their optical centers.
Is that exactly what Michal means by image rectification?
Thursday, April 2, 2009
I installed Processing 1.0 and wrote sample code
It seems Processing has a very rich library for video processing. And we would only need to write small pieces of code, a bit like Python; it is mostly Java with slight modifications that make life even easier.
Wednesday, April 1, 2009
My confusion regarding rectifying Map Frames
I was wondering how to relate the frames captured from the tablets and from the video cameras for rectification. Michal said rectification can be done linearly on the fingertips' coordinates. But actually we won't relate or compare frames from the tablets with those from the video cameras; rather, we will rectify them separately. It was my misinterpretation of the pipeline sketch. Now I get it!!! We are going to do the capturing separately. The rectification now makes sense.
Braille Reading paper from Michal
These researchers attached a pen to the blind participant's dominant finger, a little above the fingertip, and in this way they could capture the finger trace. One interesting thing was that they gave the participants the freedom to read or scan in their own way, which is a good lesson we might use to get natural finger movements from our participants. So I still believe that using blindfolded persons instead of real blind subjects would make a difference in analyzing finger-tracing behavior, and that letting the participants move their fingers freely would give much more realistic observations.
My Plan on our first Visual Team Meeting
I got a tactile map, and I think we can start capturing some sample videos at our first meeting while one of us tries to act as an unsighted person. We can then make some still frames or even analyze how we can really apply the video processing. I can also bring my video camera until we get a working one from the media center.
Our lead, Peter, thinks we can try this. Let's see what we get.
I got one tactile map from Xiangkui
I plan to bring this to Monday's class. Using this kind of map, visually impaired persons can touch the paper and feel the raised ink to recognize the objects on it. This seems very useful for unsighted people. Until I actually saw one, I didn't realize how effective such a map can be for blind users' navigation.
My task for Monday 04/06/2009
I need to provide a proof of concept that Processing 1.0 can be used for finger tracking. It is a Java-based language, and it seems we would need to write even less code than in plain Java, somewhat like Python.
I found some people saying they were able to use Processing for face detection, and I hope I will find a way to use it for finger tracing.
Processing 1.0
The MIT Media Lab has created a language for processing images and interactions. I believe we can use Processing 1.0 for the video capture part.
Link :
http://processing.org/