The field of view visualization allows the science team to specify exactly what image they would like to capture and to manipulate angle and resolution settings directly on the map. Scientists drag the central bar to a specific area of interest, which shows the flight team the intended target of the photograph. We then provide an estimate of the image quality at the target for low, medium, or high resolution. In addition, scientists can preview the image quality at any point in the field of view through a gradient visualization: higher opacity in the gradient indicates fewer millimeters per pixel. The tool also illustrates the limitations of the cameras to help the scientists plan within constraints. For instance, the cameras can only take images at certain angles, and the scientists can select from one of the available settings by manipulating the field of view on the map. In addition, the science team can see which areas of the map are already being captured, so they do not collect duplicate data unnecessarily.
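The quality estimate and the gradient both reduce to a ground-sample calculation. The following sketch shows one plausible way to derive millimeters per pixel at a target distance and map it to gradient opacity; the field-of-view angle, pixel count, and opacity cutoff are illustrative assumptions, not the actual camera specifications.

```python
import math

# Assumed camera parameters -- real values depend on the instrument.
FOV_DEG = 20.0     # horizontal field of view of one frame
SENSOR_PX = 1024   # pixels spanning that field of view

def mm_per_pixel(distance_m, fov_deg=FOV_DEG, sensor_px=SENSOR_PX):
    """Approximate ground sample size at a given distance.

    Small-angle approximation: each pixel subtends fov/sensor_px radians,
    so it covers roughly distance * fov_rad / sensor_px on the ground.
    """
    return distance_m * 1000.0 * math.radians(fov_deg) / sensor_px

def gradient_opacity(distance_m, worst_mm_px=50.0):
    """Map mm/px to [0, 1] opacity: finer detail (fewer mm/px) -> higher opacity."""
    mm_px = mm_per_pixel(distance_m)
    return max(0.0, 1.0 - mm_px / worst_mm_px)
```

Opacity falls off with distance because each pixel covers more ground, matching the behavior described above.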
The field of view visualization was a response to research showing that visuals help scientists explain their plan intent to engineers. On MER and Phoenix, engineers reported that the most useful information they could get from a scientist was an annotated image with exactly the desired camera view circled. Similarly, on the field test, the science team was concerned with what information the data product would contain, not necessarily the settings required to obtain it. The principal investigator would hold his hands up in a V-shape over the map, indicating exactly the view he wanted to see and leaving the decision of which settings would produce that view to the engineers. In addition, scientists at the field test were frustrated that there was no way to preview the millimeters per pixel at their target. The field of view visualization is a direct translation of these observations, clearly indicating to scientists what view they will get back and clearly indicating to the flight team what the science team wants to see.
The interaction for manipulating the field of view changed little over the evolution of our prototype. It was inspired directly by our observations of the principal investigator's gestures during the field test, and during both paper and digital prototyping users found the interaction very intuitive.
Determining the best way to represent the options for camera resolution proved to be much more of a challenge. We originally had the central bar represent resolution, extending further out for higher resolution and closer in for lower resolution. However, we found that representing resolution with distance was a confusing analogy.
We revised our design by replacing resolution with the concept of quality. The user would drag the central bar to their target of interest and select the desired image quality. Depending on the distance, each quality level would map to a different resolution. For example, for a very close target a high quality image could be achieved with a lower resolution, while a high quality image of a distant target would require high resolution.
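The quality-to-resolution mapping in this intermediate design amounts to picking the cheapest setting that still meets a quality threshold at the target's range. A minimal sketch, with hypothetical resolution settings and an assumed frame angle:

```python
import math

# Hypothetical resolution settings: pixels spanning a fixed frame angle.
RESOLUTIONS = {"low": 256, "medium": 1024, "high": 4096}
FOV_RAD = math.radians(20.0)  # assumed frame width

def resolution_for_quality(distance_m, target_mm_px):
    """Return the lowest setting whose mm/px at distance_m meets the target."""
    for name, px in sorted(RESOLUTIONS.items(), key=lambda kv: kv[1]):
        mm_px = distance_m * 1000.0 * FOV_RAD / px
        if mm_px <= target_mm_px:
            return name
    return None  # no setting achieves the requested quality at this range
```

A nearby target satisfies the threshold at low resolution, while a distant one forces a higher setting, mirroring the behavior described above.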
The opportunity to talk with the geologists on the field test revealed that, while they do not necessarily care about fine-grained tool specifications, the separation of image quality from resolution was a step too far removed. We found that the scientists cared about resolution insofar as it determined the scale of objects in the image. For example, a high resolution image would contain fewer millimeters per pixel for a distant object than a lower resolution image. Thus, our final design fills the panorama frustum with a gradient representing approximately the millimeters per pixel at any given distance. For a lower resolution, the gradient dissipates quickly, while for a higher resolution the gradient extends much further.
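The gradient's extent for each resolution can be sketched by inverting the ground-sample formula: the distance at which mm/px crosses a legibility cutoff is where the gradient fades out. The frame angle, cutoff value, and pixel counts below are illustrative assumptions.

```python
import math

FOV_RAD = math.radians(20.0)  # assumed frame angle
LEGIBLE_MM_PX = 25.0          # assumed cutoff where the gradient fades out
RESOLUTIONS = {"low": 256, "medium": 1024, "high": 4096}

def gradient_extent_m(sensor_px, legible_mm_px=LEGIBLE_MM_PX):
    """Distance (m) at which mm/px reaches the cutoff.

    Inverts mm_px = d * 1000 * fov / px, giving d = mm_px * px / (1000 * fov).
    """
    return legible_mm_px * sensor_px / (1000.0 * FOV_RAD)
```

Since extent grows linearly with pixel count, a higher-resolution setting pushes the gradient much further out, which is exactly the dissipation behavior the final design shows.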