TY - GEN
T1 - 3D positioning system based on one-handed thumb interactions for 3D annotation placement
AU - Tashiro, So
AU - Uchiyama, Hideaki
AU - Thomas, Diego
AU - Taniguchi, Rin Ichiro
N1 - Funding Information:
A part of this work is supported by JSPS KAKENHI, Grant Number JP17H01768 and JP18H04125.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/3
Y1 - 2019/3
N2 - This paper presents a 3D positioning system based on one-handed thumb interactions for simple 3D annotation placement with a smartphone. To place an annotation at a target point in the real environment, the 3D coordinate of the point is computed by having users interactively select corresponding points in multiple views while performing SLAM. Generally, it is difficult for users to precisely select an intended pixel on the touchscreen. Therefore, we propose to compute the 3D coordinate from multiple observations with a robust estimator so that the system tolerates inaccurate user inputs. In addition, we developed three pixel selection methods based on one-handed thumb interactions. A pixel is selected at the thumb position in a live view in FingAR, at the position of a reticle marker in a live view in SnipAR, or at that of a movable reticle marker in a frozen view in FreezAR. In a preliminary evaluation, we investigated the 3D positioning accuracy of each method.
AB - This paper presents a 3D positioning system based on one-handed thumb interactions for simple 3D annotation placement with a smartphone. To place an annotation at a target point in the real environment, the 3D coordinate of the point is computed by having users interactively select corresponding points in multiple views while performing SLAM. Generally, it is difficult for users to precisely select an intended pixel on the touchscreen. Therefore, we propose to compute the 3D coordinate from multiple observations with a robust estimator so that the system tolerates inaccurate user inputs. In addition, we developed three pixel selection methods based on one-handed thumb interactions. A pixel is selected at the thumb position in a live view in FingAR, at the position of a reticle marker in a live view in SnipAR, or at that of a movable reticle marker in a frozen view in FreezAR. In a preliminary evaluation, we investigated the 3D positioning accuracy of each method.
UR - http://www.scopus.com/inward/record.url?scp=85071840582&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85071840582&partnerID=8YFLogxK
U2 - 10.1109/VR.2019.8797979
DO - 10.1109/VR.2019.8797979
M3 - Conference contribution
AN - SCOPUS:85071840582
T3 - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Proceedings
SP - 1181
EP - 1182
BT - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019
Y2 - 23 March 2019 through 27 March 2019
ER -