Last week (3-7 September 2012) a training activity of the EU-funded project V-Must took place in Schmitten, Germany, near the famous Roman Saalburg. The picture below shows the participants in front of the building.
In this V-Must Virtual Heritage school, international students, professionals, and young researchers learned about the emerging field of Web deployment of VR and AR applications (“VR/AR Apps”) and related applied CG technologies in the area of virtual and augmented digital heritage. Topics included 3D reconstruction, 3D documentation, and presentation layers. The X3DOM framework as well as the mobile AR system developed at Fraunhofer IGD were introduced as technological tools for processing 3D-scanned objects and integrating them into Web front-ends, and likewise for registering them with the physical world.
The video shown below was produced at Ernani Silva Bruno Primary School in São Paulo in August 2012. It shows two moments of an after-school interactive media literacy project carried out under the supervision of Jorge Franco.
The project’s goal is to help students from the 4th grade level enhance technical and cognitive skills, and to learn and apply science concepts from the curriculum, through the use of digital media. Among other activities, the students have been editing X3DOM files. Through this they are expected to learn the basics of computer graphics, to develop spatial thinking and math skills related to coordinate systems and the placement of virtual objects, and to enhance the reading and writing abilities they exercise while programming and commenting X3D and HTML code.
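To give an idea of the kind of file the students edit, a minimal X3DOM scene might look like the sketch below. The include paths and the translation values are illustrative only; placing the box means editing the coordinates in the translation attribute.

```html
<html>
  <head>
    <!-- assumed standard X3DOM includes; actual paths may differ -->
    <script src="https://www.x3dom.org/download/x3dom.js"></script>
    <link rel="stylesheet" href="https://www.x3dom.org/download/x3dom.css">
  </head>
  <body>
    <x3d width="400px" height="300px">
      <scene>
        <!-- moving the box means changing these x y z coordinates -->
        <transform translation="0 1 -3">
          <shape>
            <appearance>
              <material diffuseColor="1 0 0"></material>
            </appearance>
            <box></box>
          </shape>
        </transform>
      </scene>
    </x3d>
  </body>
</html>
```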
02.09.2012 Code Technical
We have further improved picking to cope with several problems of the original approach (which was mentioned in a very old post). Picking now supports 64k different objects and provides, for all mouse events, a higher-precision pick position as well as the normal at the picked position (both in world space).
We still use a single-pass render-buffer-based approach, but instead of rendering the normalized world position into an FBO’s 8-bit RGB channel and the (internal) Shape ID into the (also 8-bit) alpha channel, we now render just the distance of the picked object position to the camera position into the RG channel (encoded as 16-bit value in the shader) and the Shape ID into the texture’s BA channel (also encoded as 16-bit).
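The 16-bit packing into two 8-bit channels can be sketched in plain JavaScript as follows. This is an analogue of what the shader and the read-back code do, not the actual X3DOM implementation; the function names and the scene-size normalization are illustrative assumptions.

```javascript
// Pack a normalized value in [0, 1] into two 8-bit channels
// (e.g. the RG pair for the distance, the BA pair for the shape ID) ...
function encode16(v) {
    var s = Math.min(Math.max(v, 0), 1) * 65535;  // scale to 16 bit
    var hi = Math.floor(s / 256);                 // high byte -> first channel
    var lo = Math.floor(s % 256);                 // low byte  -> second channel
    return [hi, lo];
}

// ... and unpack it again after reading the FBO pixel back.
function decode16(hi, lo) {
    return (hi * 256 + lo) / 65535;
}

// Before packing, the distance would be normalized by the scene size, e.g.:
//   encoded = encode16(distToCamera / sceneSize);
// and after the read-back:
//   d = decode16(pixel[0], pixel[1]) * sceneSize;
```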
Having the distance d between both positions provides enough information to reconstruct the full 3D position: we compute the view ray through the picked pixel position (x,y) and then simply move the distance d along that ray.
var line = viewarea.calcViewRay(x, y);             // ray from the camera through pixel (x, y)
var pickPos = line.pos.add(line.dir.multiply(d));  // move distance d along the ray
And instead of just reading back a single (8-bit) RGBA value at the picked pixel position (x,y), we now read back a small 2×2 window. This lets us also directly compute the object’s normal as the cross product of the two difference vectors from the decoded world-space position at (x,y) to the neighboring positions above, at (x,y-1), and to the right, at (x+1,y).
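The normal computation from the 2×2 window can be sketched like this. Here getWorldPos() stands in for decoding a pixel of the read-back window into a world-space position as described above; it is a hypothetical helper, not an X3DOM function, and is stubbed out with a flat surface for illustration.

```javascript
// Stub: in the real code this decodes a pixel of the 2x2 read-back window
// into a world-space position; here we fake a flat surface for illustration.
function getWorldPos(x, y) { return [x, y, 0]; }

// Minimal vector helpers.
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }

function cross(a, b) {
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]];
}

function normalize(v) {
    var len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    return len > 0 ? [v[0] / len, v[1] / len, v[2] / len] : v;
}

// Normal at pixel (x, y): cross product of the two edge vectors
// towards the neighbor to the right and the neighbor above.
function pickNormal(x, y) {
    var p  = getWorldPos(x, y);
    var pr = getWorldPos(x + 1, y);   // neighbor to the right
    var pu = getWorldPos(x, y - 1);   // neighbor above
    return normalize(cross(sub(pr, p), sub(pu, p)));
}
```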
This way, the corresponding UI Event object now not only provides the picked world position (worldX, worldY, and worldZ), but also normalX, normalY, and normalZ.
This year’s Web3D conference, which was held in cooperation with ACM Siggraph, has shown that the most common development platforms in 3D Web research are of course WebGL and – most interestingly 🙂 – X3DOM. Please check out the whole technical program if you’d like to learn more. A few impressions showing the conference’s opening session (top photo) as well as a panel session (bottom) can be seen below.