Spatial Updating Of Self-Position And Orientation During Real, Imagined, And Virtual Locomotion (1998)
For efficient navigational search, humans require full physical movement but not a rich visual scene.
The research of Klatzky, Loomis, et al., as well as that of Ruddle and Lessels, indicates that proprioceptive feedback is a crucial factor in our ability to accurately gauge distance, size, heading, and rate of movement. While my own experience echoes their findings, the reading left me with a couple of questions.
First, what differentiates those of us with what appears to be a more developed innate sense of direction (or perhaps a more advanced and accurate sense of place) from those who struggle with even the most rudimentary wayfinding when removed from familiar environments? Are there biological factors at play, or are the differences primarily acquired through life experience? Can we train ourselves to be more receptive to proprioceptive feedback and learn to interpret it more accurately, or are we consigned to our genetically disposed navigational fates, as it were?
Second, what are the implications of this kind of research for purely digital media, in which physical feedback is minimal or nonexistent? Are there natural parallels between physical locomotion or visual perception and our navigation of pages on the web, or through the various states of an application? Could connections between these modes of ‘navigation’ be coupled more effectively, enhancing a user’s innate ability to find their way around a complex application?
What do you think?
While reading some material recently, I was particularly struck by a parallel between the sentiment of “designing for monochrome first” (for color-deficient users) and the design movement termed “Mobile First”.
In both cases, the designer aims to build interface elements such that the largest possible percentage of users can access the content, given the real and perceived limitations of the environments involved. Only after establishing baseline accessibility and usability do you worry about nuances and aesthetics.
Monochromatic vision strips the designer of color-based tools and techniques, forcing a fallback to shape, contour, contrast, and pattern. The Mobile First design sensibility forces the designer to carefully prioritize which elements of the design are truly needed to accomplish the goal(s) of the product, framed within the limits imposed by a smaller display. You also have to consider the contextual differences in usage between a mobile device and a desktop computer, and the vastly different feature sets of modern devices.
I often hear project constraints described as drawbacks, as obstacles to building the perfect widget. It’s much more productive to think of constraints as helpful wayfinding elements on the road to successful project definition. If you know what they are, you won’t waste time down rabbit holes, and you’ll be able to focus your attention on crafting the best product possible – one that meets the unique needs of your users, whether or not they can see color and regardless of what they’re using to access your offerings.