In location-based services, increasing the user's interaction with the surrounding environment can improve their knowledge of that environment. One way to increase this interaction is to combine these services with Augmented Reality technology.
Augmented Reality combines virtual elements such as textual information and graphics with the real world, displaying real-world objects together with their corresponding virtual information to the user.
However, using this technology in location-based services can cause problems. For example, as the volume of textual and graphical information about the surrounding environment grows, displaying all of it on a mobile device's limited screen causes illegibility and reduces the usefulness of the information.
Another problem with using Augmented Reality is the uniformity of the displayed information: although the relevant information may change as the user's environmental conditions change, a non-context-aware system keeps displaying the same information and does not adapt dynamically.
In this research, a combination of Augmented Reality technology and context-awareness is used to overcome the problems mentioned above. Context-awareness monitors the user's environment and its changes and modifies the system's behavior accordingly.
To model the proposed system, we first identified the effective contexts. We used four properties as contexts to guide the user in an indoor space: the distance between the user and possible target locations, the rotation of the mobile device, the time the user has been using the application, and resolution. We used the Bounding Box concept to infer the resolution context.
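As a rough illustration of how a resolution context could be inferred from a bounding box, one might compare the screen-space bounding box of a point of interest (POI) against the total screen area. The class, method, and thresholds below are hypothetical assumptions for the sketch, not the paper's actual rules:

```java
// Hypothetical sketch: infer a "resolution" context from the screen-space
// bounding box of a POI. The 2% and 15% thresholds are illustrative only.
public class ResolutionContext {

    public enum Level { LOW, MEDIUM, HIGH }

    /**
     * Classify the resolution context from the POI's projected bounding box.
     * All dimensions are in screen pixels.
     */
    public static Level classify(int boxWidth, int boxHeight,
                                 int screenWidth, int screenHeight) {
        double fraction = (double) (boxWidth * boxHeight)
                        / (screenWidth * screenHeight);
        if (fraction < 0.02) return Level.LOW;    // POI appears tiny on screen
        if (fraction < 0.15) return Level.MEDIUM; // moderately visible
        return Level.HIGH;                        // POI fills much of the view
    }

    public static void main(String[] args) {
        // A 100x80 px box on a 1080x1920 screen covers ~0.4% of the screen.
        System.out.println(classify(100, 80, 1080, 1920));  // prints LOW
        System.out.println(classify(900, 900, 1080, 1920)); // prints HIGH
    }
}
```

A larger on-screen bounding box then justifies attaching richer virtual information to the target.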
We then collected the data required to calculate these contexts. To find the user's position and calculate the distance context, we used the Pedestrian Dead Reckoning (PDR) method. This method depends less on the environment and its infrastructure than other positioning methods such as those based on Wi-Fi and Bluetooth Low Energy (BLE) sensors. PDR uses the smartphone's IMU sensors to find the user's heading and detect his/her steps. In this research, we used the accelerometer, gyroscope, and magnetometer sensors. Magnetometer readings are strongly affected by surrounding ferrous objects, so we calibrated this sensor by applying soft-iron and hard-iron calibrations. We also applied a moving-average low-pass filter to smooth the raw accelerometer data. The time and the rotation of the device were obtained from the device clock and the IMU, respectively.
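Two of the pre-processing steps above can be sketched in Java as follows. This is a minimal illustration, not the paper's implementation; the window size and the bias values are assumed, and soft-iron correction (a per-axis scaling/rotation) is omitted for brevity:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of two pre-processing steps:
// (1) a moving-average low-pass filter for raw accelerometer samples, and
// (2) hard-iron calibration of magnetometer readings, i.e. subtracting a
//     constant per-axis bias estimated beforehand (values here are assumed).
public class SensorPreprocessing {

    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private double sum = 0.0;

    public SensorPreprocessing(int windowSize) {
        this.windowSize = windowSize;
    }

    /** Feed one raw sample; return the mean of the most recent samples. */
    public double filter(double sample) {
        window.addLast(sample);
        sum += sample;
        if (window.size() > windowSize) {
            sum -= window.removeFirst(); // drop the oldest sample
        }
        return sum / window.size();
    }

    /** Subtract a previously estimated per-axis hard-iron bias (x, y, z). */
    public static double[] hardIronCalibrate(double[] raw, double[] bias) {
        return new double[] {
            raw[0] - bias[0], raw[1] - bias[1], raw[2] - bias[2]
        };
    }

    public static void main(String[] args) {
        // Noisy accelerometer magnitudes (m/s^2), smoothed over 3 samples.
        SensorPreprocessing lpf = new SensorPreprocessing(3);
        for (double s : new double[] {9.6, 10.2, 9.9, 10.5}) {
            System.out.printf("%.3f%n", lpf.filter(s));
        }
        // Hypothetical magnetometer reading and bias, in microtesla.
        double[] corrected = hardIronCalibrate(
                new double[] {23.0, -11.0, 40.0},
                new double[] {3.0, -1.0, 5.0});
        System.out.println(corrected[0] + " " + corrected[1] + " " + corrected[2]);
    }
}
```

The hard-iron bias per axis is typically estimated offline, e.g. as the midpoint of the minimum and maximum readings observed while rotating the device.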
After calculating the contexts, we defined different Levels of Detail so that the displayed information matches the user's context.
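A context-to-Level-of-Detail mapping of this kind might look like the sketch below, using only the distance context for simplicity. The level names and distance thresholds are hypothetical assumptions; the abstract does not specify the paper's actual rules:

```java
// Hypothetical sketch of choosing a Level of Detail (LoD) from the distance
// context: nearby targets receive full textual/graphical detail, distant
// targets only a marker. Thresholds are assumed, not taken from the paper.
public class LevelOfDetail {

    public enum Lod { MARKER_ONLY, NAME_AND_DISTANCE, FULL_DETAIL }

    /** Choose a LoD from the user-to-target distance, in meters. */
    public static Lod fromDistance(double distanceMeters) {
        if (distanceMeters > 30.0) return Lod.MARKER_ONLY;
        if (distanceMeters > 10.0) return Lod.NAME_AND_DISTANCE;
        return Lod.FULL_DETAIL;
    }

    public static void main(String[] args) {
        System.out.println(fromDistance(50.0)); // prints MARKER_ONLY
        System.out.println(fromDistance(20.0)); // prints NAME_AND_DISTANCE
        System.out.println(fromDistance(5.0));  // prints FULL_DETAIL
    }
}
```

A full implementation would combine all four contexts (distance, device rotation, usage time, resolution) in the selection logic.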
This system was implemented on the 3rd floor of the Geomatics faculty at K.N. Toosi University and developed on the Android platform in the Java programming language. A performance test was carried out to evaluate the system. In each application run by different users, we recorded Random Access Memory (RAM) and Central Processing Unit (CPU) usage for the context-aware and non-context-aware systems. The results of the performance test showed that average RAM and CPU usage in the context-aware system were 37.81% and 1.83% lower, respectively, than in the non-context-aware system. We also used a questionnaire, asking ten users to evaluate the system's UI and the performance of both the context-aware and the non-context-aware system. The results showed that users were significantly more satisfied with the performance of the context-aware system.