Going Beyond the Basics · 103

Figure 3.14 Example of Gratuitous Graphic. The content of the drawing may convey more than is implied by the question. Much more than simply the volume of alcohol is conveyed by the image: it implies both smoking and drinking, it suggests a mix of hard liquor (gin and vodka) and beer, and the crushed cans and overturned bottles suggest that this was some party! The next example (see Figure 3.15) comes from a Swedish health survey (see Bälter and Bälter, 2005), designed to elicit how much time in a day is spent engaged in various levels of physical activity.

Both iconic images and color are used to convey the notion of increasing levels of activity, from sleep to vigorous physical exercise such as shoveling snow or mowing the lawn. But this example also illustrates the concreteness of images. Focusing just on the images gives the respondent a specific instance of a class of activities, whereas reading the accompanying text, which contains several examples, may better define the class of activities.

The example in Figure 3.16 is similar to that in Figure 3.15, in that it uses images to represent various classes of activities.

But, in this case, the images are used in the place of words. In other words, there is no accompanying text to disambiguate the meaning of the pictures. Take a look at the third picture, which is an image of trams.

Is this meant to exclude buses, subway trains, and other forms of public transport? Or is this meant to illustrate the broad category of public transport? Given that mopeds and motorcycles are separate items, one might assume the former, but the balance of the list (not shown) does not include such other types of public transport. This leaves the respondent with the decision about what to include or exclude when responding to this item.

104 · Designing Effective Web Surveys

Figure 3.15 Use of Color and Icons to Convey Meaning. Finally, look back at the header or banner images in Figures 3.9, 3.10, and 3.11.

While these may be intended by the designer as style elements (see Section 4.1 in the next chapter), they may be viewed as relevant to the survey by the respondent and may, whether consciously or not, affect their answers to the questions. Given that images are powerful attention-getting devices, they are likely to be perceived if they appear on the Web page.

Figure 3.16 Use of Images in Place of Words.

If not, why put them there? If so, they may influence a respondent's answer in ways possibly unintended by the designer. What, then, is the research evidence of the effect of images on survey responses?

3.5. Research on Images in Web Surveys

Research on images is mostly found in the world of multimedia instruction or media studies (newspapers and TV), and in advertising research. But relatively little research has been done on the role of images in Web surveys, or in surveys in general, which (with some notable exceptions) tend to rely on verbal descriptions. So, this section is as much a review of the issues and an outline of a research agenda as a definite set of findings that would guide decisions on the use of images.

Images have several properties that may affect the answers to an accompanying question:

1. They attract attention.
2. They tend to be more concrete than abstract.
3. They are a visually rich and complex information source, often open to several possible interpretations.

With regard to the latter, different respondents may see or react to different elements of the image, as I have illustrated with the examples.

In other words, the image may bring to mind things that the question designer had not intended. The richer the content of the image (that is, the more information it conveys), the more likely this may be to occur. This is why we need to be particularly vigilant in the use of images.

Images could affect responses to survey questions in many different ways. For example:

- They may affect category membership; using images as exemplars, the range of objects or activities covered by the question may be changed.
- They may affect context, producing contrast or assimilation effects.
- They may affect mood or emotion.
- They may make vague ideas concrete.
- They may clarify or obfuscate the meaning of a question or key term.

These effects may be potentially good or bad, depending on the purpose of the question and the use of the images. Let's look at some research evidence on these issues. One of our experiments looked at the effect of images on behavioral frequency reports, using images as exemplars (Couper, Kenyon, and Tourangeau, 2004).

The pictures accompanying the survey question represented low-frequency or high-frequency instances of the behavior in question (shopping, travel, eating out). For example, when shown a picture of a couple dining in a fine restaurant, respondents reported significantly fewer events (eating out in the past month) on average than when they were shown a high-frequency instance of the behavior (a person eating fast food in a car). The average number of episodes per month was 11.8 in the first instance and 16.2 in the second. Furthermore, those exposed to the fine restaurant image reported significantly higher levels of enjoyment, and a significantly higher price paid for the last meal eaten out.

Similarly, respondents exposed to a picture of grocery shopping reported more shopping trips in the past month than those exposed to a picture of clothes shopping. Similar effects were found for several other behaviors. We argued that the images served as cues for the retrieval of relevant incidents from memory, hence affecting the frequency reports.

In a partial replication in Germany, Hemsing and Hellwig (2006) found similar effects.

In a subsequent set of studies (Couper, Conrad, and Tourangeau, 2007), we explored the role of images in changing the context of the survey question. Respondents saw a picture of a fit woman jogging or a sick woman in a hospital bed (or no picture, in some cases) while providing an overall rating of their health. Consistent with the context effects literature, we found a significant contrast effect, with those seeing the sick woman reporting significantly higher levels of health than those seeing the fit woman.

Again, these effects were substantial: 43% of respondents reported themselves to be in very good health (8 to 10 on a 10-point scale) when exposed to the picture of the fit woman, but 58% did so when shown the sick woman. When the pictures were put in the banner of the Web page, the effect was smaller than when the pictures appeared alongside the question, or appeared on a prior introductory screen, producing some support for the banner blindness hypothesis that material in banners is likely to be ignored (Benway, 1998; Benway and Lane, 1998). But the effects did not disappear, suggesting, as noted earlier, that even pictures that do not appear to be explicitly linked to the survey question may nonetheless influence the answers to that question.
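To get a concrete sense of how large the 43% versus 58% contrast is, the following sketch computes a standard two-proportion z statistic. The group sizes here (300 per condition) are invented for illustration; the passage does not report the study's actual sample sizes.

```python
from math import sqrt

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Pooled two-sample z statistic for a difference in proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
    return (p2 - p1) / se

# 43% "very good health" with the fit-woman image vs. 58% with the
# sick-woman image; n = 300 per group is an assumed value.
z = two_proportion_z(0.43, 300, 0.58, 300)
```

With these assumed group sizes the statistic is well above conventional significance thresholds, which is consistent with the text's description of the effect as substantial.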

Witte, Pargas, Mobley, and Hawdon (2004) conducted an experiment in a Web survey, in which some respondents were shown pictures (and others not) of animals on the threatened and endangered species list. Those exposed to the pictures were significantly more likely to support protection for the species. Their study also has some suggestive data on the effect of photo quality.

The study included two species of fish, and inclusion of the photograph increased strong support for protection from 24% to 48% in the case of the Spotfin chub, but only from 46% to 48% in the case of the Arkansas shiner. One explanation they offer for this difference is the fact that the Spotfin chub stood out more sharply against the background, whereas the Arkansas shiner blended into the similarly colored background (Witte et al., 2004, p. 366).

In terms of images affecting mood, we conducted a study in which respondents were asked a series of depression and mental health items from the RAND SF-36 instrument (Couper, Conrad, and Tourangeau, 2003). For some respondents, the questions were accompanied by an image of bright sunlight; for others, an image of stormy weather was presented, and still others saw no picture.

While the effects were generally modest, they were in the expected direction, with sunshine elevating respondents' reported mood and rain lowering it. The mood activation appears to have the biggest effect when it occurs prior to answering the survey questions. For example, in response to the question, "During the past 4 weeks, how often have you felt downhearted and blue?", 15.1% of respondents said "never" in the stormy condition, and 22.6% said "never" in the sunny condition, when the images were shown on a lead-in screen prior to the battery of items. When the images appeared alongside the questions, these were 13.8% and 15.3%, respectively. This suggests that momentary changes in mood can be achieved by the simple expedient of the choice of image shown to respondents. Not every study on images has yielded significant effects on survey responses.

In an early test of the effect of images (Kenyon, Couper, and Tourangeau, 2001), we conducted a study among Knowledge Networks panel members around the American Music Awards (the Grammy Awards), with various images of nominated artists. We found no effects of any of several image manipulations. One possible explanation was the generally low level of interest in, and knowledge of, the Grammy-nominated artists among the panel members.

If you have no idea who an artist is, showing a picture may not help. However, in a survey on political knowledge, also using the Knowledge Networks panel, Prior (2002) found that adding photographs of political figures significantly improved performance on political knowledge tests. Thus, for example, asking people if they knew what political office Dick Cheney held produced fewer correct answers than when the question was accompanied by a picture of Dick Cheney.

When Cheney was shown alongside President Bush in a picture, the proportion correctly identifying his office increased further. What have we learned from the few studies that have been conducted so far? First, it is clear that images do indeed affect the answers to survey questions, although not always. Second, these studies suggest the potential risks of including images, but no studies have yet demonstrated the value of including images.

And third, studying image effects is complex: it depends both on the visual semantics and on the visual syntax. Clearly, there is much more work to be done in this area, especially in terms of finding ways that the inclusion of images may improve survey measurement or facilitate the task of answering the questions. For many of the effects demonstrated in the research reviewed here, we do not know which of the answers is the correct one, or if, indeed, there is one correct answer.

We know that images change responses, but we do not yet know if or how they may improve the reliability or validity of responses. Aside from the effect of images on measurement error, do they affect nonresponse or breakoff rates? The presumption is that the inclusion of images makes the survey more interesting and appealing, thereby encouraging completion. But is there any research evidence to support this? Guéguen and Jacob (2002a) included a digital photograph of a male or female requester in an HTML e-mail invitation to a Web survey sent to students at a French university.

When a photograph was included, subjects were more likely to comply with the request (84%) than when no photograph was included (58%). Furthermore, the female requester was helped more of the time than the male requester when a photograph was included, but there was no difference when no photograph was included. I'll return to the issue of survey invitations in Chapter 6.

In an experiment conducted as part of a Web survey on a highly sensitive topic among women (Reed, Crawford, Couper, Cave, and Haefner, 2004), different photographs of the principal investigator were used to suggest either a medical setting (using a white coat and stethoscope) or a regular academic setting (using a regular business suit). Respondents were only exposed to the photographs after they received the e-mail invitation and clicked on the URL to start the survey. No differences were found in overall survey completion.

In addition, no differences were found in the distribution of responses to sensitive questions in the survey, by the framing suggested by the photographs and accompanying text. In our experiments on the use of images described above, we have found no discernible effects on breakoffs. In other words, exposure to the pictures does not increase completion relative to those who saw no pictures.

In the Grammy Awards study (Kenyon, Couper, and Tourangeau, 2001), respondents were exposed to up to six different image manipulations. We created an index from 0 to 6 for the number of pictures seen, and examined the relationship with debriefing questions at the end of the survey. The correlation between picture exposure and satisfaction with the survey was a mere 0.023 (p = .29, n.s.), while the correlation with subjective survey length was 0.057 (p < .01).
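As a sketch of the kind of analysis described here, the following computes a Pearson correlation between a 0-6 picture-exposure index and a debriefing rating. The data below are invented purely for illustration; they are not the Grammy Awards study data.

```python
def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # covariance term
    sx = sum((a - mx) ** 2 for a in x) ** 0.5              # sd of x (unscaled)
    sy = sum((b - my) ** 2 for b in y) ** 0.5              # sd of y (unscaled)
    return cov / (sx * sy)

# Hypothetical data: number of picture manipulations seen (0-6)
# and a debriefing satisfaction rating for ten respondents.
exposure = [0, 1, 2, 3, 4, 5, 6, 2, 4, 6]
rating   = [3, 4, 3, 5, 4, 5, 4, 3, 5, 4]
r = pearson_r(exposure, rating)
```

In practice one would also need the p-value (and a much larger n, as in the study) before reading anything into a correlation of this size.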

Thus, while the inclusion of pictures appeared to have no impact on the subjective enjoyment of the survey, there was a modest positive association with perceived length. But these studies (except Guéguen and Jacob, 2002a) were based on existing panels of Web users, whether opt-in or prerecruited. Their decision to start a survey, or to continue once begun, may be based on factors other than the visual appeal of the survey derived from the inclusion of images.

In other words, these studies weren't designed to test the effect of images on breakoffs and may thus be an unfair test of that relationship. To summarize the research, there is much we still don't know about how the inclusion of images may affect survey measurement error and nonresponse error. We have much to learn about when to use images and how best to do so.

As researchers learn how best to exploit the visual power of the Web to improve the quality of answers obtained, and to improve the survey experience for respondents, we are likely to see a rise in the use of images in Web surveys. I can't wait to see what develops.
