PROJECT STATEMENT (need one…after research for a week)
Use of search engines to gain a better understanding.
Viewing online tutorials on Augmented Reality.
Contemplation of using GPS latitude/longitude to compute distances in the virtual world (see the distance sketch after this list).
Creation of Yahoo Pipes (modelling) and Layar (virtual mapping) accounts.
URL encoded QR codes (qrcode.kaywa.com).
Looked at Skaloop, which is another virtual mapping program.
Contacted Jordan Perr and Prof. Brian Cussack about guidance for the term.
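Rough sketch of the GPS idea above: the haversine formula gives the great-circle distance between two latitude/longitude fixes, which could be used to place virtual content at a real-world offset from the viewer. Class and method names are only illustrative, not part of any code we have yet.

```java
// Sketch only: great-circle (haversine) distance between two GPS fixes, in metres.
public class GeoDistance {
    private static final double EARTH_RADIUS_M = 6371000.0;

    public static double distanceMeters(double lat1, double lon1,
                                        double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_M * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }
}
```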
Discussed methods of creating a user interface where photos could be uploaded and directly linked to the Augmented Reality Wall.
Development of a web-based upload for art, likely to be built with Django (Jordan is very familiar with this framework and can help with this choice).
A Cooper domain name is being sought; Jordan proposes using a Cooper server with shell access via Prof. Cussack in the Computer Center.
The host site to be created must be able to produce some type of data feed (most likely RSS) from which data can reach the mobile device over an internet connection.
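A minimal sketch of how the phone side might read such a feed, assuming a plain RSS document and using only standard Android/Java classes (HttpURLConnection and XmlPullParser). The FeedReader class and the idea of collecting item links are placeholders, not our actual code.

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserFactory;

// Sketch: pull the gallery feed from a server URL and collect the <link> of every
// <item>, which the phone could then download over its internet connection.
public class FeedReader {
    public static List<String> fetchItemLinks(String feedUrl) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(feedUrl).openConnection();
        List<String> links = new ArrayList<String>();
        InputStream in = conn.getInputStream();
        try {
            XmlPullParser parser = XmlPullParserFactory.newInstance().newPullParser();
            parser.setInput(in, null);
            boolean insideItem = false;
            for (int event = parser.getEventType();
                     event != XmlPullParser.END_DOCUMENT;
                     event = parser.next()) {
                if (event == XmlPullParser.START_TAG) {
                    if ("item".equals(parser.getName())) {
                        insideItem = true;
                    } else if (insideItem && "link".equals(parser.getName())) {
                        links.add(parser.nextText());   // URL of one piece of art
                    }
                } else if (event == XmlPullParser.END_TAG
                        && "item".equals(parser.getName())) {
                    insideItem = false;
                }
            }
        } finally {
            in.close();
            conn.disconnect();
        }
        return links;
    }
}
```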
Use of glyphs. (How will the mobile device pick up on the meaning of a glyph? Presumably over the internet.)
ARToolkit: free open-source code that runs the visual aspect of glyphs (only the drawing to be displayed is missing). [Must try downloading.]
Use of OpenGL for graphics, to be teamed with ARToolkit.
Questions on how to host a full gallery of photos via one data feed while only showing one photo at a time.
Possibly a screen full of photos, but the glyph only shows one photo at a time, placed in location and moving about the screen.
Discussion about the mobile phone application: Jordan would like us to meet with the Treasure Hunt group to discuss any problems in coding (cross-group initiative).
Decided that our Cooper server should be responsible for the artist interface and for creating the data feed, which will be transferred to the mobile application using Java's networking classes.
Phone application development is responsible for parsing the 3D model that comes in on the data feed and then displaying it in the camera viewer with the help of the AndAR OpenGL code, which will be manipulated.
Teams of specialized study devised: application development = Grant Aarons & John Han; server interface for uploading and creation of the data feed = Noah Dinkes & Kenny Lehr. [The modeling aspect, to be created on the server and sent via the data feed, will be worked on cooperatively.]
Server Configuration
Application Development
Java build path for the use of .jar libraries
Understanding the handlers used in Java src files
Discussed link between .xml files (display format) and Java src files (executing functions)
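A minimal sketch of that link, assuming a hypothetical res/layout/main.xml that declares a Button with id open_camera: the .xml file defines the display format, and the Activity in the Java src file inflates it and attaches the executing function.

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;

// Sketch: the layout (.xml) declares the widgets; the Activity (.java) wires them up.
public class MainActivity extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);            // display format comes from the .xml file

        Button openCamera = (Button) findViewById(R.id.open_camera);
        openCamera.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {         // the executing function lives in the src file
                // start the camera/AR activity here
            }
        });
    }
}
```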
Creating a SPLASH screen for the application.
This adds to the professional look of the application.
Also allows us to have a title slide in an application which should (as designed now) go straight to the camera view and object rendering mode.
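A minimal sketch of how the splash screen could hand off to the camera view; SplashActivity, CameraActivity, R.layout.splash, and the delay value are placeholder names rather than our actual classes.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.os.Handler;

// Sketch: show the title slide briefly, then jump straight to the camera/rendering activity.
public class SplashActivity extends Activity {
    private static final long SPLASH_MS = 2000;   // arbitrary title-slide duration

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.splash);          // hypothetical title-slide layout

        new Handler().postDelayed(new Runnable() {
            public void run() {
                startActivity(new Intent(SplashActivity.this, CameraActivity.class));
                finish();                          // drop the splash from the back stack
            }
        }, SPLASH_MS);
    }
}
```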
A great tutorial for Eclipse development, and really just for getting an understanding of Java, is posted in the Background and Resources section.
Discussed progress on the title screen in the works.
Resolved getting the AndAR.jar library onto the build path for our Cooper Application
Tried starting up the camera on our application using the implemented AndAR.jar components
Found the example AndAR.jar project and got a camera view open in that src code
While trying to work the camera handler (opener) into our application after the title screen, we kept getting errors.
Will work on the camera handler from AndAR.jar and hope to have it up and running by the end of the weekend.
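For reference, a stripped-down sketch of what the camera handler needs to do, based on the standard Android Camera/SurfaceHolder API rather than the AndAR example's exact code; class and field names are placeholders.

```java
import android.app.Activity;
import android.content.pm.ActivityInfo;
import android.hardware.Camera;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

// Sketch: open the camera once the preview surface exists, release it when it goes away.
public class CameraActivity extends Activity implements SurfaceHolder.Callback {
    private Camera camera;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
        SurfaceView preview = new SurfaceView(this);
        preview.getHolder().addCallback(this);
        preview.getHolder().setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); // needed on older devices
        setContentView(preview);
    }

    public void surfaceCreated(SurfaceHolder holder) {
        camera = Camera.open();                   // requires the CAMERA permission in the manifest
        try {
            camera.setPreviewDisplay(holder);
            camera.startPreview();
        } catch (Exception e) {
            camera.release();
            camera = null;
        }
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }

    public void surfaceDestroyed(SurfaceHolder holder) {
        if (camera != null) {
            camera.stopPreview();
            camera.release();
            camera = null;
        }
    }
}
```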
Got the camera orientation to be landscape.
Jordan looks at the progress made.
Camera hardware is utilized in the application thanks to changes to the Android Manifest (features/permissions).
The settings key on Android phones is hard-coded so that three buttons can be pressed that link to descriptions (About Us, Purpose, and Development Team info).
Found some bugs in the code: the camera functionality keeps closing when accessing the hard-coded descriptions, probably because separate scripts are being called up. Looking to fix this with Jordan during the week.
Looking forward to getting some internet-fed data onto the phone, as well as using differently coded markers and retrieving contact info for artists.
A productive day's work totaled 6+ hours.
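A minimal sketch of the hard-coded settings-key menu described in this entry. The item ids are arbitrary, and showing the descriptions in a dialog (instead of launching a separate screen) is just one possible way around the camera-closing bug, not necessarily how our code does it.

```java
import android.app.Activity;
import android.app.AlertDialog;
import android.view.Menu;
import android.view.MenuItem;

// Sketch: these two overrides would sit in the camera activity so the menu key shows
// the three description buttons without leaving the camera view.
public class CameraActivity extends Activity {

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        menu.add(0, 1, 0, "About Us");
        menu.add(0, 2, 1, "Purpose");
        menu.add(0, 3, 2, "Development Team");
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        String text;
        switch (item.getItemId()) {
            case 1:  text = "About us description goes here";          break;
            case 2:  text = "Purpose description goes here";           break;
            case 3:  text = "Development team description goes here";  break;
            default: return super.onOptionsItemSelected(item);
        }
        new AlertDialog.Builder(this).setMessage(text).show();   // dialog keeps the camera alive
        return true;
    }
}
```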
Figured out scaling of the 3D model within the mobile device (a sketch follows below).
Stability of the model when scaled depends on the size of the marker. [How far away the marker will be easily viewable is unclear; we will have to test. Try to make the marker as large as possible; maybe even the existing facade can be coded into a marker.]
Still must figure out some way of styling the pop-up menus, canceling the menu views on a finger tap, and getting data onto the 3D model view (this completes the main functionality).
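A rough sketch of the scaling idea, assuming the ARObject base class and draw(GL10) hook from the AndAR custom-object example (the package path and signatures may differ between AndAR versions). The SCALE factor would be tuned against the physical marker size.

```java
import javax.microedition.khronos.opengles.GL10;

import edu.dhbw.andar.ARObject;   // package path as in the AndAR sources we looked at

// Sketch: scale the rendered model relative to the marker before drawing it.
public class ScaledObject extends ARObject {
    private static final float SCALE = 2.5f;   // arbitrary; tune against marker size

    public ScaledObject(String name, String patternFile,
                        double markerWidth, double[] markerCenter) {
        super(name, patternFile, markerWidth, markerCenter);
    }

    @Override
    public void draw(GL10 gl) {
        super.draw(gl);                        // applies the marker's transformation matrix
        gl.glScalef(SCALE, SCALE, SCALE);
        // ... the model's vertex/texture draw calls go here ...
    }
}
```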
Spoke with Eric Leong about his android application.
Wanted to talk about how his application came out so smooth. He was very helpful in explaining some of his implemented code: the use of a progress wheel for times when the Java code in the application is searching for data, and specific code for input streaming from the internet.
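A sketch of that idea in our own terms: a progress wheel shown on the UI thread while a background task streams bytes from a URL. DownloadTask and the way the bytes get handed off are placeholders, not Eric's actual code.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

import android.app.Activity;
import android.app.ProgressDialog;
import android.os.AsyncTask;

// Sketch: spinner while the background thread reads an input stream from the internet.
public class DownloadTask extends AsyncTask<String, Void, byte[]> {
    private final Activity activity;
    private ProgressDialog dialog;

    public DownloadTask(Activity activity) {
        this.activity = activity;
    }

    @Override
    protected void onPreExecute() {
        dialog = ProgressDialog.show(activity, "", "Loading gallery data...");
    }

    @Override
    protected byte[] doInBackground(String... urls) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(urls[0]).openConnection();
            InputStream in = new BufferedInputStream(conn.getInputStream());
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            for (int n; (n = in.read(buffer)) != -1; ) {
                out.write(buffer, 0, n);
            }
            in.close();
            conn.disconnect();
            return out.toByteArray();
        } catch (Exception e) {
            return null;                          // download failed
        }
    }

    @Override
    protected void onPostExecute(byte[] data) {
        dialog.dismiss();
        // hand the bytes off to whatever displays them (e.g. decode to a Bitmap)
    }
}
```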
Prof. Lent aided in executing a C program in which images seen by the iSight camera on a Mac are converted to a sequence of floating-point values that depict intensity (black to white).
Tried implementing this code for chosen images of the Cooper facade (it may end up working).
Must test, and possibly make more pattern-file attempts from the actual street view outside.
Got Cooper ARt to recognize a custom CU logo inside a black border.
Tried running recognition of a color logo inside the black border and that also worked.
It seems that the program favors the black box and black-and-white floating-point .patt codes. When testing, the Java custom-object code recognized only one of the markers, which follows the expected favoring of black-and-white patterns.
Trying to get the program to recognize portions of the Cooper facade alone, without the border. This WORKS to some degree: the 3D model lies on printed-out pages of the facade, and we also tried recognizing the facade through online photos. The quality of 3D modeling on a marker depends heavily on the conditions of pixel recognition (color), which means it is sunlight dependent. Will run trials on the actual facade in the coming days to perfect marker recognition in the absence of pure black and white, and without the common black border.
I believe at this point that a better-quality photo needs to be converted to a marker in .patt code. This means better printer quality and better lighting at the time of running the executable C file that creates the .patt file.
Will try these and other means to get the program to work with a borderless marker that IS the Cooper FACADE.
Thinking of taking multiple shots of the same portion of the facade in multiple lighting conditions, so that recognition falls into one of several .patt files.
Video
http://www.youtube.com/watch?v=XqHmLb408CY
Trials for recognition of marker from across 3rd ave. (unsuccessful)
The marker wasn't large enough at first, and the black box was unrecognizable against the shadow space.
A white border is necessary, and we are still not sure how the marker can be recognized. Could try putting the marker in front of the camera but in Cooper Square.
Going to try translating the 3D model behind the custom marker and getting the image to be displayed.
Video
http://www.youtube.com/watch?v=QXdYyJzcZBA
What has been done in Django in the past few weeks:
Tutorial has been completed and understood
“Polls” application was edited to become the “Gallery” application
Redefined navigation through site
Added extra fields for pictures
Added support for pictures
Working with the Treasure Hunt Group trying to figure out the login interface to be used in Django
Attempting to get uploading on the website working without a login (hopefully this will be done in the next few days).
The uploading and login are going to be much more difficult than expected. One issue is that we are working with the newest version of Django, version 1.3. While using the newest release is generally accepted as the best method, the problem is that the Django tutorials found all over the internet are now obsolete: they refer to a different version and will not work with the current one. Also, the documentation Django provides on these two topics is not very well written, so it is hard for us, as beginners in this environment, to work out how to use the features. We will keep trying, though.
Trying to decide if it is more beneficial to continue struggling with Django or to change to PHP.
Presentation
Made the final decision to work with the other Augmented Reality group to make a joint website, written in PHP.
Android Sub-Group:
Down to the wire, we found out how texture mapping could be accomplished on the Android phone (a sketch follows after this entry).
Development on the Application side to follow.
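A minimal sketch of the texture-mapping step on Android, using the standard OpenGL ES 1.x calls; the helper class name is ours, and the Bitmap is assumed to come from wherever the image was downloaded or decoded.

```java
import javax.microedition.khronos.opengles.GL10;

import android.graphics.Bitmap;
import android.opengl.GLUtils;

// Sketch: upload a Bitmap into a GL texture once, then bind it before drawing the mesh.
public class TextureHelper {
    public static int loadTexture(GL10 gl, Bitmap bitmap) {
        int[] ids = new int[1];
        gl.glGenTextures(1, ids, 0);

        gl.glBindTexture(GL10.GL_TEXTURE_2D, ids[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);

        // copies the bitmap's pixels into the currently bound texture
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
        return ids[0];   // bind this id in the draw code and supply texture coordinates
    }
}
```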
Server Sub-Group:
Still haven't heard back from the Augmented Reality Wall group as to what they need us to do for their server.
Began work on a new server for our group due to the lack of communication between the two groups about the server
Simple HTML layouts showing proof of concept.
Server Sub-Group:
Includes a bit of PHP to get the user to log in before reaching the gallery portion of the site, but the account is hard-coded into the site.
Would like to work more with PHP after this is done, to maybe get a better server working.
It seems like PHP would have been the better choice for our group from the beginning: in two days, a basic website was built to about the same level as the Django-powered website that took months of work. The documentation for PHP is much more useful and more available on the internet, as is help with building HTML websites.
Awesomeeeee! Got two different glyphs displaying two different images.
EVEN BETTER: got two different glyphs detected and imaging simultaneously!!!
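For the record, a sketch of how the two glyphs can each map to their own image: one ARObject per .patt file. This assumes the AndARActivity/getArtoolkit()/registerARObject API seen in the AndAR examples (names and packages may differ by version), and TexturedObject is a placeholder for an image-drawing ARObject subclass.

```java
import android.os.Bundle;

import edu.dhbw.andar.ARToolkit;
import edu.dhbw.andar.AndARActivity;
import edu.dhbw.andar.exceptions.AndARException;

// Sketch: register one object per marker, each tied to its own .patt file and image.
public class CooperArActivity extends AndARActivity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ARToolkit artoolkit = getArtoolkit();
        double[] center = {0, 0};                 // marker centre in pattern coordinates
        try {
            artoolkit.registerARObject(new TexturedObject("glyphA", "patt.glyphA", 80.0, center));
            artoolkit.registerARObject(new TexturedObject("glyphB", "patt.glyphB", 80.0, center));
        } catch (AndARException e) {
            // pattern file missing or unreadable
        }
    }
}
```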
Need to work on the speed with which the GL draw is running.
Mainly I have narrowed it down to one of my for() loops in the GL draw for the object. This code seems to be necessary because it loads bitmaps pixel line by pixel line. I can't adjust this properly yet and am looking to make it a more static piece of code so it doesn't run on every draw and slow the whole program!
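A sketch of the "more static" form I'm aiming for: do the bitmap upload once, behind a flag, so the expensive loop no longer runs on every draw. Field and method names are placeholders for whatever the object class actually uses.

```java
import javax.microedition.khronos.opengles.GL10;

import android.graphics.Bitmap;
import android.opengl.GLUtils;

// Sketch: upload the texture on the first draw only, then just bind it afterwards.
public class CachedTextureObject {
    private Bitmap bitmap;                   // set once, e.g. after the download finishes
    private boolean textureLoaded = false;
    private final int[] textureId = new int[1];

    public void draw(GL10 gl) {
        if (!textureLoaded && bitmap != null) {
            gl.glGenTextures(1, textureId, 0);
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId[0]);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);   // expensive call runs once
            textureLoaded = true;
        }
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId[0]);
        // ... existing vertex/texture-coordinate draw calls stay here ...
    }
}
```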
Would like to get an array of photos to work on one of my markers so that it can start to take on a gallery form and personalized selection atmosphere.
Plan on having one glyph across 3rd Avenue, with translation of the image onto a mesh of 41 Cooper Square. The other marker can serve the area straight in front of the building, near the front entrance to the lobby!
Interfacing needs work; I need to look at more code and build for ease of use, among other things.