Communication

General technological framework of DynaMus

The scope of this work was to develop a fully dynamic, interactive virtual environment as an open technological framework that allows the easy creation of virtual museums and exhibitions using distributed Web content, without the need for predefined scenarios. To implement such a system, open-data technologies had to be integrated so as to be available through the 3D virtual environment provided by the game engine. The Unity game engine was selected for its low cost, rich arsenal of development tools, user-friendliness, cross-platform delivery, and powerful scripting and database-connectivity capabilities.

The Figure above depicts an abstract overview of the functionalities supported by DynaMus, which can be considered a 3D content management system (3D-CMS) offering both back-end and front-end content management functionalities. The current implementation of DynaMus supports exhibitions in the form of:

  • 2D images, which are mapped onto flat surfaces that simulate painting frames;
  • 3D models, which can be easily manipulated and placed in the virtual environment of the exhibition.

The UML

See the sequence diagram of DynaMus.

Supported Features

Gamification

VR-like first-person 3D visualisation, navigation and interaction.

The GUI

Both exhibition visits and exhibition management are performed through a unified GUI in an attempt to provide an intuitive WYSIWYG environment.

Supported Objects

2D images and 3D objects in the OBJ file format.

Data Interoperability

The primary 2D image data resources already integrated into the framework are Google Images and cultural content from Europeana (http://europeana.eu), while custom user content is also supported (see the Figure).
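To give a flavour of how such a structured-query subsystem might work, the sketch below builds a search URL and parses a JSON response for Europeana. This is a hypothetical, stand-alone re-creation in Python (the framework itself uses C# scripts inside Unity); the endpoint and parameter names follow the public Europeana Search API, the API key is a placeholder, and the sample response is a canned illustration of the expected structure, not real data:

```python
import json
from urllib.parse import urlencode

def build_europeana_query(terms, api_key="YOUR_API_KEY", rows=12):
    """Build a Europeana Search API URL restricted to image records."""
    params = {
        "wskey": api_key,        # API key placeholder
        "query": terms,
        "qf": "TYPE:IMAGE",      # restrict results to 2D image resources
        "rows": rows,
    }
    return "https://api.europeana.eu/record/v2/search.json?" + urlencode(params)

def extract_image_urls(response_text):
    """Pull the remote image locations out of a JSON search response."""
    data = json.loads(response_text)
    return [item["edmIsShownBy"][0]
            for item in data.get("items", [])
            if "edmIsShownBy" in item]

# Canned response illustrating the structure the parser expects:
sample = ('{"success": true, "items": '
          '[{"title": ["Example painting"], '
          '"edmIsShownBy": ["http://example.org/painting.jpg"]}]}')
print(extract_image_urls(sample))  # ['http://example.org/painting.jpg']
```

The key design point is that only the remote image URLs are extracted and kept; the images themselves stay on the remote repository.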

Repository Interconnection

There is no limit on the total number of 2D and 3D resources added to an exhibition, as the system stores only URIs (or URLs) that point to remotely stored digital resources.
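As an illustration, a minimal exhibit record of this kind might look as follows. This is a hypothetical sketch; all field names are invented for illustration and are not taken from the actual DynaMus schema:

```python
import json

# Hypothetical exhibit record: the server stores only lightweight metadata
# plus a URI pointing at the remotely hosted resource -- never the resource
# itself -- so the number of exhibits is effectively unbounded.
exhibit_record = {
    "id": 42,
    "kind": "3d",                                    # "2d" image or "3d" OBJ model
    "uri": "http://example.org/models/amphora.obj",  # remote digital resource
    "position": [1.5, 0.0, -3.0],                    # placement in the virtual room
    "annotation": "Example textual annotation",
}

# The record can travel between client and server as a JSON string,
# matching the string-based communication used throughout the system.
payload = json.dumps(exhibit_record)
print(len(payload) < 200)  # True: only a pointer is stored, not the model data
```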

Workflow for reading and presenting interactive 3D models in DynaMus

As mentioned before, the data interchange between DynaMus, the Web-based repositories (Google Images, Europeana) and the internal repository (considered the server side of the system) is based on JSON. For each repository, a structured query subsystem was built according to its data-exchange requirements, and each query subsystem is able to handle the corresponding response data structures. In addition, the server side of DynaMus is responsible for handling all user requests triggered through the 3D virtual environment that relate to building the exhibition environment. More specifically, the server side provides a number of services in the form of PHP requests, which allows all communication to be performed through string parsing. Thus, user requests are driven through the GUI to the server side of DynaMus, and the server triggers dedicated C# scripts that POST string queries to PHP services, which respond with structured, string-formatted data.

Figure 4 depicts the process of developing and visualising the elements found in an exhibition. The 3D object building process is shown on the left side of the Figure. The operation begins by providing the remote location (URL or URI) of the digital resource. The underlying algorithm parses and analyses all the related files (Step 1) and initiates real-time 3D mesh generation (Step 2). Subsequently, the provided UV texture image is mapped onto the 3D mesh (Step 3) using the information stored in the material file that usually accompanies the 3D model files. After the generation of the 3D model, the user may annotate it with textual information (Step 4). The interactivity is depicted on the right side of the Figure: when a visitor selects an object, a pop-up window appears at the bottom of the screen providing the textual information that accompanies the exhibit.
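The first two steps of the object-building workflow can be sketched as follows. This is a deliberately simplified, stand-alone re-creation in Python, not the actual C#/Unity implementation, and it handles only a minimal subset of the Wavefront OBJ format (`v`, `vt` and triangulated `f v/vt` records):

```python
def parse_obj(text):
    """Step 1: parse vertex positions, UV coordinates and faces from OBJ text."""
    verts, uvs, faces = [], [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "vt":
            uvs.append(tuple(float(x) for x in parts[1:3]))
        elif parts[0] == "f":
            # each face corner is "v_index/vt_index" (1-based in OBJ)
            faces.append([tuple(int(i) - 1 for i in c.split("/")[:2])
                          for c in parts[1:]])
    return verts, uvs, faces

def build_mesh(verts, uvs, faces):
    """Step 2: flatten faces into per-corner arrays a renderer can consume."""
    positions, texcoords = [], []
    for face in faces:
        for vi, ti in face:
            positions.append(verts[vi])
            texcoords.append(uvs[ti])   # carries the UV used for Step 3 texturing
    return positions, texcoords

# A one-triangle OBJ fragment, as it might arrive from a remote URL:
obj_text = """
v 0 0 0
v 1 0 0
v 0 1 0
vt 0 0
vt 1 0
vt 0 1
f 1/1 2/2 3/3
"""
positions, texcoords = build_mesh(*parse_obj(obj_text))
print(len(positions))  # 3
```

In the real system the resulting position and UV arrays would feed the game engine's run-time mesh API, and the material file would supply the texture image mapped in Step 3.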

Ambient Occlusion

Real-time screen space effects for enhanced visual realism

To make the 3D environment more realistic, real-time screen-space effects such as Ambient Occlusion, Depth of Field, Antialiasing, Light Mapping and Shadow Rendering are used. These can significantly improve the quality and efficiency of the experience, but impose some additional graphics-hardware requirements, which, nowadays, are considered common. More specifically:

  • Ambient Occlusion is a sophisticated ray-tracing calculation that simulates soft global illumination by faking the darkness perceived in corners, at mesh intersections, and in creases and cracks, where light is diffused (usually) by accumulated dirt and dust.
  • Depth of Field is a common post-processing effect that simulates one of the most notable properties of a camera lens: its limited depth of focus.
  • Antialiasing gives the graphics a smoother appearance by blending the boundaries between differently coloured areas of the image.
  • Light Mapping computes how point, area, directional and spot lights shine upon every pixel displayed in screen space.
  • Shadow Rendering builds on the light mapping and simulates environment shadows associated with the light sources.

The Figure above depicts the difference these real-time effects make.
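To give a flavour of how a screen-space effect works, the sketch below estimates an ambient-occlusion factor from a small depth buffer: a pixel is darkened according to how many nearby samples lie closer to the camera than the pixel itself. This is a deliberately tiny CPU-side illustration of the principle only, not Unity's GPU implementation; all names, values and thresholds are invented:

```python
def ssao_factor(depth, x, y, radius=1, bias=0.05):
    """Return an occlusion factor in [0, 1]; 1.0 means fully unoccluded."""
    h, w = len(depth), len(depth[0])
    center = depth[y][x]
    occluded = total = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                total += 1
                if depth[ny][nx] < center - bias:  # neighbour blocks ambient light
                    occluded += 1
    return 1.0 - occluded / total if total else 1.0

# A 3x3 depth buffer with a "crease": the centre pixel lies behind its
# neighbours, so it should receive less ambient light than a flat corner.
depth = [[0.2, 0.2, 0.2],
         [0.2, 0.9, 0.2],
         [0.2, 0.9, 0.2]]
print(ssao_factor(depth, 1, 1) < ssao_factor(depth, 0, 0))  # True
```

A renderer would multiply each pixel's ambient lighting term by this factor, darkening creases and corners exactly as described above.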