We didn't just assume DMD was better than the competition; we went back to dentists and got their opinions. We gave eight dentists the tablet, showed them the basic features of the system, and asked them to build a complex treatment plan for a patient they had never seen. Keep in mind they did this without the actual patient in front of them, making the task that much harder.
Watching the dentists solve the case was enlightening, but we wanted to be sure our opinions weren't colored. So we gave each dentist the Questionnaire for User Interface Satisfaction (QUIS), developed at the University of Maryland. The survey asks about 30 questions and leaves some room for free-form comments. The chart below shows the average rating for the first seven survey questions.
As you can see from the chart, the responses were extremely positive. All eight dentists who tested our system and filled out the survey rated it fairly close to ‘wonderful,’ and gave high marks on the rest of the ratings as well. In fact, on none of the 30 questions did the average score fall below 7. More rigorous data is needed to build a statistically significant case, but this is a convincing start.
User Opinions

After the survey, we asked the dentists what they thought. Here are some of the things we heard:
- "Wow, this is too good to be true!"
- "As easy as can be!"
- "This is a dentist's playtoy!"
Subjective opinions of first-time users are one important measure, but many dentists who adopt such a system will use it for a long time. They will become expert users, and efficiency becomes much more important than discoverability. So how do we know how efficient our interface is? One way is Keystroke-Level Modeling.
Keystroke-Level Modeling (KLM) answers the following question: “For an expert user of the interface, how long will it take him or her to perform a given task?” There is no representation of the user’s higher-level goals or train of thought, just a sequence of low-level actions that perform a task (typically using keyboard and mouse, but possibly also a touchscreen, microphone, or other input and output devices). The model attempts to accurately represent the times for the cognitive, perceptual, and motor processes needed to perform those actions, and can also account for system wait times.
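To make the idea concrete, here is a minimal KLM calculator. The operator times are the classic published estimates from the KLM literature; the task breakdown at the bottom is a hypothetical example, not one of the actual tasks we modeled.

```python
# Minimal Keystroke-Level Model (KLM) sketch.
# Operator times (seconds) are the classic KLM estimates;
# the example task sequence is hypothetical, for illustration only.

OPERATOR_TIMES = {
    "K": 0.28,  # keystroke or tap (average skilled typist)
    "P": 1.10,  # point with a mouse/finger to a target on screen
    "H": 0.40,  # home hands between keyboard and pointing device
    "M": 1.35,  # mental preparation before an action
    "B": 0.10,  # press or release a mouse button (a click is B B)
}

def klm_time(operators):
    """Sum the predicted expert time (seconds) for a sequence of KLM operators."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Hypothetical task: think, point and click a menu, point and click an item.
task = ["M", "P", "B", "B", "P", "B", "B"]
print(f"Predicted expert time: {klm_time(task):.2f} s")  # 3.95 s
```

Tools like CogTool automate exactly this kind of bookkeeping, while also inserting mental operators and system wait times according to built-in rules.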
Using CogTool, a tool that estimates how long a task would take in a given interface, we compared the efficiency of our system against Patterson Eaglesoft, version 14. Only features present in both systems could be compared. We performed a series of tasks in each system and compared the times CogTool generated. Most of these tasks involved radiographs, since the two systems have several similar functions there. Our system took about 30 seconds, while Eaglesoft took about 105 seconds. See the report for all the gory details.
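Put another way, the rounded figures from the comparison imply that Eaglesoft took roughly three and a half times as long; a quick sanity check on the arithmetic:

```python
# Rounded predicted task times from the CogTool comparison (seconds).
dmd_time = 30
eaglesoft_time = 105

ratio = eaglesoft_time / dmd_time
print(f"Eaglesoft took {ratio:.1f}x as long as our system")  # 3.5x
```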