Minutes

January 28, 2011

3pm: Group
  • PROJECT STATEMENT (need one…after a week of research)
  • Use of search engines to gain a better understanding.
  • Viewing online tutorials on Augmented Reality.
  • Consideration of GPS and latitude/longitude for creating distance in the virtual world.
  • Creation of Yahoo Pipes (modeling) & Layar (virtual mapping) accounts.
  • URL-encoded QR codes (qrcode.kaywa.com).
  • Looked at Skaloop, another virtual mapping program.
  • Contacted Jordan Perr and Prof. Brian Cusack about guidance for the term.

January 31, 2011

12pm: Jordan
  • Discussed methods of creating a user interface where photos could be uploaded and directly linked to the Augmented Reality Wall.
  • Development of a web-based upload for art, plausibly to be built with Django (Jordan is very familiar with this framework and can be of aid in this choice).
  • Cooper domain name sought; Jordan proposes using a Cooper server with shell access via Prof. Cusack in the Computer Center.
  • The host site to be created must be able to produce some type of data feed (most likely RSS) through which data can reach the mobile device over an internet connection.
3pm: Professor Cusack
  • Use of glyphs. (How will the mobile device pick up on the meaning of a glyph? Must be via the internet?)
  • ARToolkit: free open-source code that runs the visual aspect of glyphs (only missing the drawing to be displayed). [Must try downloading; see the sketch after this list.]
  • Use of OpenGL for graphics, to be teamed with ARToolkit.
  • Questions on how to host a full gallery of photos via one data feed while only showing one photo at a time.
  • Possibly a screen full of photos, but the glyph shows only one photo at a time, in place and moving about the screen.
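For reference, a minimal sketch of how a glyph might be registered on Android through the AndAR wrapper around ARToolkit (the route we are considering). The class and method names (AndARActivity, ARObject, registerARObject) follow the AndAR sample project as we understand it, and the pattern file name patt.sample is a placeholder, not our actual marker:

    import javax.microedition.khronos.opengles.GL10;

    import android.os.Bundle;
    import edu.dhbw.andar.ARObject;
    import edu.dhbw.andar.ARToolkit;
    import edu.dhbw.andar.AndARActivity;
    import edu.dhbw.andar.exceptions.AndARException;

    // One glyph = one ARObject. When ARToolkit spots the marker it calls
    // draw(gl) with the model-view matrix already set to the marker's
    // pose, so whatever is drawn appears to sit on the glyph.
    class GlyphSquare extends ARObject {
        public GlyphSquare() {
            // name, .patt pattern file (in assets), marker width (mm), center
            super("glyph", "patt.sample", 80.0, new double[] { 0, 0 });
        }

        @Override
        public void init(GL10 gl) {
            // one-time GL setup for this object (lighting, textures)
        }

        @Override
        public void draw(GL10 gl) {
            super.draw(gl); // applies the marker transformation
            // OpenGL ES calls go here, e.g. a textured quad for one photo
        }
    }

    public class WallActivity extends AndARActivity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            try {
                ARToolkit artoolkit = getArtoolkit();
                artoolkit.registerARObject(new GlyphSquare());
            } catch (AndARException e) {
                e.printStackTrace();
            }
        }
    }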

February 7, 2011

12pm: Jordan
  • Discussion about the mobile phone application: Jordan would like us to meet with the Treasure Hunt group to discuss any problems in coding (cross-group initiative).
  • Decided that our Cooper server should be responsible for the artist interface and for creating the data feed, which is transferred to the mobile application via Java's networking classes (see the sketch after this list).
  • Phone application development is responsible for parsing the 3D model that comes in on the data feed and then displaying it in the camera viewer with the help of the AndAR OpenGL code, which will be manipulated.
  • Teams of specialized study devised: application development = Grant Aarons & John Han; server interface for uploading and creation of the data feed = Noah Dinkes & Kenny Lehr. [The modeling aspect, created on the server and sent via the data feed, is to be worked on cooperatively.]
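A minimal sketch of the networking idea, assuming the feed is plain RSS/XML served over HTTP; the java.net classes are standard, but the feed URL and the downstream parsing step are placeholders:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Pull the server's data feed over HTTP using standard java.net
    // classes; the returned text (RSS/XML) would then be parsed into
    // model data on the phone.
    public class FeedFetcher {
        public static String fetch(String feedUrl) throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(feedUrl).openConnection();
            conn.setConnectTimeout(10000); // fail fast on a bad connection
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line).append('\n');
            }
            in.close();
            conn.disconnect();
            return body.toString();
        }
    }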
4pm: Noah

February 12, 2011

February 16, 2011

9pm: Jordan

Server Configuration

  • Decided on goals for the few weeks ahead
  • Set up short-term Gantt Charts (Scheduling)
  • Resolved some issues with both Django Development and Android Development

Application Development

  • Set up the Java build path for the use of .jar libraries
  • Worked to understand the handlers used in Java src files
  • Discussed the link between .xml files (display format) and Java src files (executing functions); a sketch follows below
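A small sketch of that xml-to-Java link, using placeholder resource names (R.layout.main, R.id.status): the XML describes the display, the Java src inflates it and executes functions, and a Handler posts work onto the UI thread:

    import android.app.Activity;
    import android.os.Bundle;
    import android.os.Handler;
    import android.widget.TextView;

    public class MainActivity extends Activity {
        private final Handler uiHandler = new Handler();

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main); // inflates res/layout/main.xml

            // look up a widget declared in the XML by its android:id
            final TextView status = (TextView) findViewById(R.id.status);
            uiHandler.post(new Runnable() {
                public void run() {
                    status.setText("Loaded from Java, displayed via XML");
                }
            });
        }
    }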

February 20, 2011

1-3am

Creating a SPLASH screen for the application.

  • This adds to the professional look of the application.
  • Also allows us to have a title slide in an application that should (as designed now) go straight to the camera view and object-rendering mode; a sketch follows below.
  • A great tutorial for Eclipse development, and really just for getting an understanding of Java, is posted in the Background and Resources section.
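A minimal sketch of the splash idea, with placeholder layout and activity names (R.layout.splash, CameraViewActivity): show the title slide briefly, then hand off to the camera view:

    import android.app.Activity;
    import android.content.Intent;
    import android.os.Bundle;
    import android.os.Handler;

    public class SplashActivity extends Activity {
        private static final long SPLASH_MS = 2000; // title slide duration

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.splash); // the title slide layout

            new Handler().postDelayed(new Runnable() {
                public void run() {
                    startActivity(new Intent(SplashActivity.this,
                            CameraViewActivity.class));
                    finish(); // drop the splash from the back stack
                }
            }, SPLASH_MS);
        }
    }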

March 1, 2011

  • Met with Professor Cusack to learn about web server functions and how a basic web server works (a toy sketch follows)
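As a toy illustration of what was covered (not Prof. Cusack's code): at its most basic, a web server accepts a TCP connection, reads the HTTP request line, and writes back a response:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class TinyServer {
        public static void main(String[] args) throws Exception {
            ServerSocket server = new ServerSocket(8080);
            while (true) {
                Socket client = server.accept(); // one request at a time
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream()));
                String requestLine = in.readLine(); // e.g. "GET / HTTP/1.1"
                PrintWriter out = new PrintWriter(client.getOutputStream());
                out.print("HTTP/1.0 200 OK\r\n");
                out.print("Content-Type: text/plain\r\n\r\n");
                out.print("You asked for: " + requestLine + "\n");
                out.flush();
                client.close();
            }
        }
    }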

March 4, 2011 (all Jordan guidance at this point for the application side)

1-3pm
  • Discussed progress on the title screen in the works
  • Resolved getting the AndAR.jar library onto the build path for our Cooper application
  • Tried starting up the camera in our application using the implemented AndAR.jar components
  • Found the example AndAR.jar project and got a camera view open in that src code
  • In trying to work the camera handler (opener) into our application after the title screen, we kept getting errors
  • Will work on the camera handler from the AndAR.jar and hope to have it up and running by the end of the weekend
  • Got the camera orientation to be landscape (see the sketch below)
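A sketch of the landscape fix done from code; the same effect can come from android:screenOrientation="landscape" on the activity in AndroidManifest.xml. The activity name is a placeholder:

    import android.app.Activity;
    import android.content.pm.ActivityInfo;
    import android.os.Bundle;

    public class CameraViewActivity extends Activity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // force the camera preview into landscape
            setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
            // camera preview setup (SurfaceView + Camera.open()) follows here
        }
    }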

March 6, 2011

9-10pm
  • Jordan looks at the progress made.
  • Camera hardware is utilized in the application due to manipulation of the Android manifest (features).
  • The settings key on Android phones is hard-coded so that three buttons can be pressed that link to descriptions (About Us, Purpose, and Development Team info); see the sketch after this list.
  • Found some bugs in the coding, as the camera functionality keeps closing when accessing the hard-coded descriptions. Probably due to separate scripts being called up; looking to fix this with Jordan during the week.
  • Looking forward to getting some internet-fed data onto the phone, to using different coded markers, and to retrieving contact info for artists.
  • Productive day's work totaled 6+ hours.
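A sketch of the hard-coded menu idea, with placeholder class name and strings. Showing each description in a dialog keeps the camera activity alive, which may avoid the closing bug noted above (a guess at a fix, not our confirmed solution):

    import android.app.Activity;
    import android.app.AlertDialog;
    import android.view.Menu;
    import android.view.MenuItem;

    public class CameraMenu extends Activity {
        // the phone's menu/settings key brings up these three entries
        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            menu.add(0, 1, 0, "About Us");
            menu.add(0, 2, 0, "Purpose");
            menu.add(0, 3, 0, "Development Team");
            return true;
        }

        // show the description in a dialog over the live camera view
        @Override
        public boolean onOptionsItemSelected(MenuItem item) {
            new AlertDialog.Builder(this)
                    .setTitle(item.getTitle())
                    .setMessage("Description text goes here.") // placeholder
                    .setPositiveButton("Close", null)
                    .show();
            return true;
        }
    }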

March 7, 2011

2:30pm
  • Discussed the progress in application
  • Prof. Lima seemed pleased with the progress made thus far, ahead of the midterm presentation
  • Decided to follow progress with continual video updates

Camera Functionality and Hard-Coded Menu Bar: Experiencing & Fixing Bugs

March 20, 2011

8pm
  • Figured out scaling of the 3D model within the mobile device (see the sketch after this list).
  • Stability of the model when scaled depends on the size of the marker. [From how far away will the marker be easily viewable? Will have to test-run; try to make the marker as large as possible, maybe even the existing facade can be coded into a marker.]
  • Still must figure out some way of styling the pop-up menus, canceling the menu views on a finger tap, and getting data onto the 3D model view (this completes the main functionality).
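A sketch of the scaling call inside a marker's draw routine; the scale factor here is a placeholder that has to be tuned against the physical marker size so the model stays stable on screen:

    import javax.microedition.khronos.opengles.GL10;

    public class ScaledModel {
        private float scale = 2.5f; // placeholder; tune against marker size

        public void draw(GL10 gl) {
            gl.glPushMatrix();
            gl.glScalef(scale, scale, scale); // uniform scale of the model
            // ... issue the model's vertex/texture draw calls here ...
            gl.glPopMatrix();
        }
    }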

March 22, 2011

5-6pm
  • Spoke with Eric Leong about his android application.
  • Wanted to talk about how his application came out so smooth. He was very helpful in explaining some of his implemented code: the use of a progress wheel for times when the application's Java code is searching for data, and specific code for input streaming from the internet (a sketch follows below).
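A sketch of those two ideas as we understood them (not Eric's actual code), with a placeholder URL: a progress wheel spins while a background thread streams bytes from the internet, then is dismissed on the UI thread:

    import java.io.InputStream;
    import java.net.URL;

    import android.app.Activity;
    import android.app.ProgressDialog;

    public class DownloadHelper {
        public static void load(final Activity activity, final String url) {
            // indeterminate progress wheel while we search for data
            final ProgressDialog wheel = ProgressDialog.show(
                    activity, "", "Loading data...", true);
            new Thread(new Runnable() {
                public void run() {
                    try {
                        InputStream in = new URL(url).openStream();
                        byte[] buf = new byte[4096];
                        while (in.read(buf) != -1) {
                            // consume/store the streamed bytes here
                        }
                        in.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                    } finally {
                        activity.runOnUiThread(new Runnable() {
                            public void run() { wheel.dismiss(); }
                        });
                    }
                }
            }).start();
        }
    }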
6-6:30pm
  • Prof. Lent aided in executing a C program in which images seen by the iSight camera on a Mac are converted to a sequence of floating-point values depicting brightness (black to white).
  • Tried implementing this code on chosen images of the Cooper facade (may end up working?).
  • Must test, and possibly capture more pattern-file attempts from the actual street view outside (a rough sketch of the conversion idea follows).
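A rough desktop-Java analogue of the conversion idea (not Prof. Lent's actual C code): reduce each pixel of a facade photo to one value between 0 (black) and 1 (white). Real ARToolkit .patt files store sampled color values, so this only illustrates the black-to-white reduction described above:

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class BrightnessMap {
        public static float[][] toBrightness(File photo) throws Exception {
            BufferedImage img = ImageIO.read(photo);
            float[][] out = new float[img.getHeight()][img.getWidth()];
            for (int y = 0; y < img.getHeight(); y++) {
                for (int x = 0; x < img.getWidth(); x++) {
                    int rgb = img.getRGB(x, y);
                    int r = (rgb >> 16) & 0xff;
                    int g = (rgb >> 8) & 0xff;
                    int b = rgb & 0xff;
                    // average the channels: 0 = black, 1 = white
                    out[y][x] = (r + g + b) / (3f * 255f);
                }
            }
            return out;
        }
    }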

March 25, 2011

1am
  • Got Cooper ARt to recognize a custom CU logo inside a black border.
  • Tried running recognition of a color logo inside the black border, and that also worked.
  • It seems the program favors the black box and black-and-white floating-point .patt codes. When testing, the Java custom-object code recognized only one of the markers, following the expected favoring of black-and-white patterns.
  • Trying to get the program to recognize portions of the Cooper facade alone, without the border. This WORKS to some degree: we are getting the 3D model to lie on printed-out pages of the facade, and tried recognizing the facade through online photos. The quality of 3D modeling on a marker depends heavily on the conditions of pixel recognition (color), which means it is sunlight-dependent. Will run trials on the actual facade in the coming days to perfect marker recognition in the absence of pure black and white, and without the common black border.
  • I believe at this point that a better-quality photo needs to be converted to a marker in .patt code. This means better printer quality and better lighting at the time of running the executable C file that creates the .patt file.
  • Will try these and other means to get the program to work with a borderless marker that IS the Cooper FACADE.
  • Thinking of taking multiple shots of the same portion of the facade under multiple lighting conditions, so that recognition falls into one of several .patt files (see the sketch after this list).
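A sketch of that multiple-.patt idea, reusing the hypothetical GlyphSquare object from the earlier AndAR sketch, here assumed to take its pattern file as a constructor argument; the file names are placeholders. Any one of the registered markers matching would bring up the same facade model:

    import edu.dhbw.andar.ARToolkit;
    import edu.dhbw.andar.exceptions.AndARException;

    public class FacadeMarkers {
        // register one marker object per lighting condition
        public static void registerAll(ARToolkit artoolkit)
                throws AndARException {
            String[] patterns = {
                    "facade_morning.patt", "facade_noon.patt",
                    "facade_shade.patt" };
            for (String patt : patterns) {
                artoolkit.registerARObject(new GlyphSquare(patt));
            }
        }
    }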

Video http://www.youtube.com/watch?v=XqHmLb408CY

March 26, 2011

2-8pm
  • Trials for recognition of the marker from across 3rd Ave. (unsuccessful).
  • The marker wasn't large enough at first, and the black box was indistinguishable from shadow.
  • A white border is necessary, and we are still not sure how the marker can be recognized. Could try putting the marker in front of the camera but in Cooper Square.
  • Going to try translating the 3D model behind the custom marker and getting the image to be displayed.

Video http://www.youtube.com/watch?v=QXdYyJzcZBA

April 4, 2011

What has been done in Django in the past few weeks:

  • Tutorial has been completed and understood
  • “Polls” application was edited to become the “Gallery” application
  • Redefined navigation through the site
  • Added extra fields for pictures
  • Added support for pictures

April 4, 2011

2pm
  • Test of markers and -Z translation onto the Cooper facade from a viewer in Cooper Square. (Can be applied to the bike space in front of the Foundation Building; see the sketch after the video link.)

http://www.youtube.com/watch?v=Q9UmGvewDMw&NR=1
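A sketch of the -Z translation being tested: push the model away from the marker along the camera axis so it lands on the facade beyond the printed marker. The depth value is a placeholder that has to be tuned on site:

    import javax.microedition.khronos.opengles.GL10;

    public class TranslatedModel {
        private float depth = -500f; // placeholder; tune against marker width

        public void draw(GL10 gl) {
            gl.glPushMatrix();
            gl.glTranslatef(0f, 0f, depth); // move the model along -Z
            // ... draw the model here ...
            gl.glPopMatrix();
        }
    }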

  • Found the type of surface imaging we would like to implement on the facade.

http://www.youtube.com/watch?v=BIGdEkJ5P-4&feature=mfu_in_order&list=UL

April 15, 2011

  • Working with the Treasure Hunt group, trying to figure out the login interface to be used in Django
  • Attempting to get uploading on the website working without a login (hopefully done in the next few days)

April 16, 2011

  • The uploading and login are going to be much more difficult than expected. One issue is that we are working with the newest version of Django, 1.3. While using the newest release is generally accepted as the best method, the problem is that the Django tutorials found all over the internet are now obsolete: they refer to a different version and will not work with the current one. Also, the documentation Django provides on these two topics is not very well written, so it is hard for us, as beginners coding in this environment, to work out how to use these features. We will keep trying, though.

April 16, 2011

  • Trying to decide whether it is more beneficial to continue struggling with Django or to change to PHP

April 19, 2011

Presentation

  • Made the final decision to work with the other Augmented Reality group to make a joint website, written in PHP.

April 19, 2011

  • Have not yet heard back from the Augmented Reality Wall 1 group as to what they need help with on the server

April 23, 2011

  • Met with Jordan to discuss the OpenGL texture mapping of an image.
  • No solution was found after joint research and guesswork testing (a sketch of the standard mechanism we were attempting follows).
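For reference, the standard Android OpenGL ES texture-upload pattern we were trying to get working: upload a Bitmap once as a GL texture via GLUtils.texImage2D, then bind it before drawing the quad/mesh. This is the textbook sketch, not the solution we were missing at the time:

    import javax.microedition.khronos.opengles.GL10;

    import android.graphics.Bitmap;
    import android.opengl.GLUtils;

    public class TextureLoader {
        public static int upload(GL10 gl, Bitmap bitmap) {
            int[] ids = new int[1];
            gl.glGenTextures(1, ids, 0);
            gl.glBindTexture(GL10.GL_TEXTURE_2D, ids[0]);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D,
                    GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D,
                    GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
            return ids[0]; // texture name to bind at draw time
        }
    }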

May 1, 2011

  • Have not yet heard back from the Augmented Reality Wall 1 group as to what they need help with on the server

May 2, 2011

  • Android Sub-Group:
    • Down to the wire: found out how texture mapping could be accomplished on the Android phone.
    • Development on the Application side to follow.
  • Server Sub-Group:
    • Still haven't heard back from the Augmented Reality Wall group as to what they need us to do for their server
    • Began work on a new server for our group, due to the lack of communication between the two groups about the server
    • Simple HTML layouts showing proof-of-concept

May 3, 2011

  • Server Sub-Group:
    • Finished up the proof-of-concept website at http://eid103ar.cooper.edu
    • Includes a bit of PHP requiring the user to log in to reach the gallery portion of the site, but the account is hard-coded into the site.
    • Would like to work more with PHP after this is done, to maybe get a better server working.
    • It seems PHP would have been the better choice for our group from the beginning: in two days, a basic website was made to about the same level as the Django-powered website that took months of work. The documentation for PHP is much more useful and more available on the internet, as is help with building HTML websites.
Final Debriefing
  • The TEXTURE WORKS!!!!
  • Time ran out on making the app very “pretty”
  • Videos for Final PowerPoint

http://www.youtube.com/watch?v=1tyOwQtC_xQ http://www.youtube.com/watch?v=fxmd6e_TV2I

May 5, 2011

  • Awesome! Got two different glyphs displaying two different images.
  • EVEN BETTER: got two different glyphs detected and imaging simultaneously!
  • Need to work on the speed with which the GL draw is running.
  • Have mainly narrowed it down to one of my for() loops in the GL draw for the object. This code seems to be necessary because it loads bitmaps pixel by pixel, line by line. Can't adjust this properly yet; looking to make it a more static form of code so it doesn't run on every draw and slow the whole program (see the sketch after this list)!
  • Would like to get an array of photos to work on one of my markers so that it can start to take on a gallery form and a personalized-selection atmosphere.
  • Plan on having one glyph across 3rd Avenue, with translation of the image onto a mesh of 41 Cooper Square. The other marker can serve the area straight in front of the building, near the front entrance to the lobby!
  • Interfacing needs work; I need to look at more code and build for ease of use, among other things.
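A sketch of the fix being considered, with placeholder names (not our actual code): do the bitmap upload once in a one-time init, keep the GL texture id, and only bind it in draw(), so the per-frame for() loop disappears:

    import javax.microedition.khronos.opengles.GL10;

    import android.graphics.Bitmap;
    import android.opengl.GLUtils;

    public class CachedTexture {
        private int textureId = -1;

        // one-time setup: the expensive pixel upload runs once, not per frame
        public void init(GL10 gl, Bitmap bitmap) {
            int[] ids = new int[1];
            gl.glGenTextures(1, ids, 0);
            textureId = ids[0];
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
        }

        // per-frame: a cheap bind instead of reloading the bitmap
        public void draw(GL10 gl) {
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
            // ... draw the textured quad here ...
        }
    }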
 