
A framework for unobtrusive interaction with proactive AR agents


Results

We compared Smart Agent to a conventional, voice-controlled AR assistant baseline. We measured cognitive load using the NASA Task Load Index (NASA-TLX), overall usability with the System Usability Scale (SUS), user preference on a 7-point Likert scale, and total interaction time.
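The SUS figures discussed below come from Brooke's standard scoring of ten alternating-polarity Likert items. A minimal sketch of that computation (the responses shown are illustrative, not the study's data):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5
    Likert responses using the standard SUS rule: odd (positively
    worded) items contribute (response - 1), even (negatively worded)
    items contribute (5 - response), and the sum is scaled by 2.5."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Illustrative respondent: maximally positive on odd items,
# maximally negative on even items -> ceiling score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Per-participant SUS scores like these are what the significance test between the two systems would be run on.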

The most significant finding was the reduction in cognitive workload. The NASA-TLX data showed that, on the 100-point mental-demand scale, the average score for Smart Agent was 21.1, compared to 65.0 for the baseline, a statistically significant difference (p < .001). We observed a similarly significant reduction in perceived effort (p = .0039), meaning that the proactive system successfully offloaded the mental work of forming a query.

Regarding usability, both systems performed well, with no statistically significant difference between their SUS scores (p = .11). However, participants expressed a strong and statistically significant preference for Smart Agent (p = .0074). On a 7-point scale, the average preference rating was 6.0 for Smart Agent, compared to 3.8 for the baseline.

For interaction time, logged from the moment a prompt was triggered to the final system response to the user's input, the baseline was faster (μ = 16.4 s) than Smart Agent (μ = 28.5 s). This difference is an expected trade-off of the system's two-step interaction flow, in which the agent first proposes an action and the user then confirms it. The strong user preference for Smart Agent suggests this trade-off was acceptable, particularly in social contexts where discretion and minimal user effort were important.
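The interaction-time measure amounts to simple instrumentation from prompt trigger to final response. A minimal sketch of such logging (class and method names are illustrative, not from the paper):

```python
import time

class InteractionTimer:
    """Records elapsed wall-clock time from the moment a proactive
    prompt is triggered until the final system response, mirroring
    the interaction-time measure described above."""

    def __init__(self):
        self._start = None
        self.log = []  # one elapsed-seconds entry per completed interaction

    def prompt_triggered(self):
        # monotonic() is immune to system clock adjustments
        self._start = time.monotonic()

    def response_delivered(self):
        if self._start is None:
            raise RuntimeError("no prompt in flight")
        elapsed = time.monotonic() - self._start
        self.log.append(elapsed)
        self._start = None
        return elapsed
```

Averaging `timer.log` per condition would yield the μ values reported above.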


