Effect of IAD on perceived mutual gaze.
Two-dimensional portraits generate a strong perception of eye contact. I remember the posters of rock stars in my friend’s room when I was a kid: when I walked around the room, Alice Cooper’s gaze appeared to follow me. This phenomenon, known as the Mona Lisa effect, is well studied and documented. However, as we discovered in a recent study, the Mona Lisa effect is not the only phenomenon that explains the magic-like appeal of two-dimensional portraits of rock stars and enigmatically smiling ladies.
As a model poses for the photographer, they often look directly into the lens of the camera; eye contact (mutual gaze) makes for captivating photographs. When an observer then looks at the resulting two-dimensional image, both of the observer’s eyes receive a direct gaze. This contrasts with a natural setting, where people look at either the left or the right eye of the other person at any one time. Thus, a two-dimensional photograph or painting provides a perception of eye contact that is stronger than is possible in a three-dimensional, real-life setting.
The implications of the results are manifold: two-dimensional portraits are widely used in emotion and gaze research, but our results indicate that using them reduces the ecological validity of the findings, i.e., the results may not apply in real-life situations. The results were published in the open-access journal Journal of Vision.
Six stereoscopic image pairs used in the experiment. The images can be seen in 3D by ‘looking through’ the image.
Mediated facial expressions do not elicit emotions as strongly as real-life facial expressions. In particular, 2D photographs of facial expressions fail to evoke emotions as strongly as live faces, possibly due to the low fidelity of the pictorial presentation. However, we found that 3D facial expressions evoke stronger emotions than their 2D counterparts, and that natural depth levels create the strongest emotional amplification, presumably due to the illusion of non-mediation. In this experiment, we manipulated depth magnitude by varying the distance between the two cameras that provided the left and right images for the 3D presentation.
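The effect of the inter-camera distance on disparity follows from standard parallel-camera stereo geometry. A minimal sketch, in which the baseline, focal length, and depth values are illustrative rather than the settings used in the study:

```python
def image_disparity_mm(baseline_mm: float, focal_mm: float, depth_mm: float) -> float:
    """Horizontal disparity on the sensor for a point at distance depth_mm,
    assuming two parallel pinhole cameras separated by baseline_mm."""
    return baseline_mm * focal_mm / depth_mm

# Halving the inter-camera distance halves every disparity in the image,
# which is how a "narrowed" depth percept can be produced. The 65 mm
# baseline (a typical human interpupillary distance) and 35 mm focal
# length are illustrative assumptions.
natural = image_disparity_mm(baseline_mm=65.0, focal_mm=35.0, depth_mm=1000.0)
narrow = image_disparity_mm(baseline_mm=32.5, focal_mm=35.0, depth_mm=1000.0)
```

Because disparity scales linearly with the baseline, scaling the inter-camera distance scales the entire depth percept up or down.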
Facial expressions are commonly studied with 2D photographs, while the results are generalized to the real world. Yet stereoscopic images replicate reality more faithfully and are thus more valid stimuli. One could say that 3D photographs trick the brain into thinking that the face in a 3D photograph is more real than the face in a 2D photograph.
Whereas the negative valence and arousal elicited by angry expressions were amplified most at the most natural depth magnitude, the positive valence elicited by happy expressions was amplified in both the narrowed and the natural depth conditions. The findings are relevant for virtual and augmented reality 3D displays such as the Oculus Rift, indicating that 3D content should preferably provide a natural depth percept to deliver emotion-evoking experiences.
Currently, 3D is mostly used in action films to emphasize the effects, but it could also be employed to enhance the emotions conveyed by the actors. The article was published in the open-access journal i-Perception: http://ipe.sagepub.com/content/6/6/2041669515615071.
The passive vs. active 3D TV debate is ongoing. Both technologies have their advantages, and neither one is perfect. On one hand, active glasses provide full resolution, but are heavy, require a sync signal from the TV, and consume batteries. On the other hand, passive displays halve the vertical resolution of the TV, but the glasses are more comfortable. Furthermore, the horizontal interlacing rows of the passive display are still visible at the recommended viewing distance. Horizontal interlacing causes other problems, too, as we discovered in a recent study, published in ACM Transactions on Applied Perception.
Left, an interlaced stereoscopic image pair for parallel viewing. The oblique edge of the dark object is highlighted with a white rectangle. Right, the interlaced oblique edge displayed such that the odd pixel rows are intended for the right eye (R) and the even pixel rows are intended for the left eye (L). The white edges highlight the segment endpoints and the arrows point to the closest matches in the left eye image for the specific feature in the right eye image.
One could argue that by displaying half of the image’s pixels to each eye, the perceived resolution would still be that of the full image. Intuitively, it would thus be beneficial to display every other pixel row of the original image to each eye. However, the visual system does not align the rows as neatly as we would like. Instead, it searches for matching features in the images seen by the left and right eyes. As a result, if we have an oblique edge, as in the figure above, the visual system faces a choice of whether to match a feature to the row above or the row below.
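The ambiguity can be illustrated with a toy model of an interleaved oblique edge: an edge that shifts one pixel per row, with even rows sent to the left eye and odd rows to the right eye. This is an idealized sketch, not the stimuli used in the study:

```python
import numpy as np

rows = np.arange(8)
edge_x = rows.copy()  # oblique edge: moves one pixel right on each row

left_rows = rows[rows % 2 == 0]   # even rows shown to the left eye
right_rows = rows[rows % 2 == 1]  # odd rows shown to the right eye

# For a right-eye edge point, the nearest left-eye rows are one above and
# one below. Matching to either one yields a spurious horizontal offset:
r = int(right_rows[2])                       # a right-eye row (row 5)
match_above = int(edge_x[r] - edge_x[r - 1])  # +1 pixel of false disparity
match_below = int(edge_x[r] - edge_x[r + 1])  # -1 pixel of false disparity
```

Whichever candidate the visual system picks, the oblique edge carries a one-pixel horizontal disparity that was never in the content, which the brain then interprets as depth.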
In our study, some participants’ visual systems preferred the match above, others’ the match below. Whichever the direction, all participants perceived depth where there should be none. The depth artifact remained visible even at very small pixel sizes: to eliminate it, the viewing distance for a 46″ HD-resolution TV would have to be 7 meters (23 ft). A more feasible solution is to average the even and odd rows of the image, effectively halving the vertical image resolution.
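The 7-meter figure can be sanity-checked from the panel geometry: at that distance, one pixel of a 46″ 1080p screen subtends roughly 15 arcseconds, which is on the order of typical stereoacuity thresholds. The threshold value is an assumption for illustration, not a figure from the paper:

```python
import math

# Geometry of a 46-inch 1920x1080 TV with a 16:9 aspect ratio.
diag_mm = 46 * 25.4
width_mm = diag_mm * 16 / math.hypot(16, 9)
pitch_mm = width_mm / 1920  # one pixel is roughly 0.53 mm wide

def arcsec_per_pixel(dist_m: float) -> float:
    """Angle one pixel subtends at the given viewing distance, in arcseconds
    (small-angle approximation)."""
    return math.degrees(pitch_mm / 1000 / dist_m) * 3600

# At 7 m, a one-pixel false disparity subtends about 15.6 arcsec, i.e. it
# shrinks to around the limit of human stereoacuity (assumed ~15 arcsec).
one_pixel_at_7m = arcsec_per_pixel(7.0)
```

At a more typical living-room distance of 2 to 3 meters the same one-pixel mismatch subtends three times that angle, well within what the stereo system can resolve, which is why the artifact is visible in practice.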
That is the question we set out to answer in our study, whose results were published earlier this month in Springer’s journal 3D Research. The obvious reason is the chicken-and-egg problem: people do not have stereoscopic 3D displays. But what if they had them? We gave five novice participants 3D cameras and displays for four weeks and let them use the cameras as they liked.
The number of photographs with excess disparity fell by about 70% over the four weeks. The number of photographs taken each week varied only a little.
The participants took a total of 699 photographs during the trial. Each week, they answered a series of questions, evaluated their attitudes towards the photography, and chose the best and worst photographs. After the trial, we conducted a thorough exit interview with each participant individually. In addition to the participants’ responses, we analysed the photographs by computing disparity maps from matched local features (SIFT) in the stereo pairs.
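The published analysis matched SIFT features; as a self-contained stand-in, the same idea of recovering a feature’s horizontal disparity by searching along the corresponding row of the other image can be sketched with simple sum-of-absolute-differences block matching. This is illustrative code, not the study’s pipeline:

```python
import numpy as np

def row_disparity(left: np.ndarray, right: np.ndarray, x: int, y: int,
                  patch: int = 3, max_d: int = 8) -> int:
    """Disparity of the feature at (x, y) in the left image: the shift of
    the best-matching patch along the same row of the right image,
    scored by sum of absolute differences."""
    ref = left[y:y + patch, x:x + patch].astype(float)
    best_err, best_d = np.inf, 0
    for d in range(min(max_d, x) + 1):  # stay inside the image
        cand = right[y:y + patch, x - d:x - d + patch].astype(float)
        err = np.abs(ref - cand).sum()
        if err < best_err:
            best_err, best_d = err, d
    return best_d

# Tiny synthetic pair: a bright square shifted 2 px left in the right image.
left = np.zeros((12, 12)); left[4:7, 6:9] = 1.0
right = np.zeros((12, 12)); right[4:7, 4:7] = 1.0
# row_disparity(left, right, x=6, y=4) recovers the 2-pixel shift
```

Aggregating such per-feature disparities over a whole photograph gives a disparity map whose extreme values flag photographs with excess disparity.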
It turns out that the participants encountered several problems during the experiment. The main problem was that, at the beginning of the trial, they took photographs at too short a distance. The distance between the lenses of the camera used in the trial was 7.5 cm, which creates excessively large disparities in photographs taken under normal indoor conditions. Photographs with excess disparity are extremely unpleasant to look at, and in real use the initial disappointment would likely have led to the camera gathering dust on a shelf. The participants did, however, learn to avoid excess disparities. They also commented that people in the photographs looked unnatural. Other issues were caused by the camera flash and by objects at the edges of the photograph. Check out the full article for further details. Thanks to Nokia Research Center for collaboration on this project.
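Why a 7.5 cm baseline is problematic indoors can be sketched with a small-angle disparity model. The object distances and the roughly one-degree comfort limit below are illustrative assumptions, not figures from the study:

```python
import math

def disparity_range_deg(baseline_m: float, near_m: float, far_m: float) -> float:
    """Angular disparity range between the nearest and farthest object in the
    scene, small-angle approximation for parallel cameras."""
    return math.degrees(baseline_m * (1 / near_m - 1 / far_m))

# Indoors: subject about 1 m away, back wall about 3 m away, and the
# camera's 7.5 cm lens separation. The resulting disparity range is close
# to 3 degrees, far beyond the ~1 degree comfort limit often cited for
# stereoscopic viewing (an assumed figure).
indoor = disparity_range_deg(0.075, 1.0, 3.0)
```

The same formula shows why backing away helps: with the subject at 3 m and the background at 10 m, the range drops to about a degree, which is consistent with the participants learning to keep their distance.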