The Quick Environmental Exposure and Sensitivity Inventory (QEESI) was developed as a screening questionnaire for multiple chemical intolerances (MCI). The instrument has four scales: Symptom Severity, Chemical Intolerances, Other Intolerances, and Life Impact. Each scale contains 10 items, scored from 0 = “not a problem” to 10 = “severe or disabling problem.” A 10-item Masking Index gauges ongoing exposures that may affect individuals’ awareness of their intolerances as well as the intensity of their responses to environmental exposures. 
The Challenge
While the QEESI is a validated instrument, the questionnaire was designed to be completed on paper and later reviewed by a physician. My team conducted research to guide the stakeholders through the transition to an online form, with a user-centered focus that would make the instrument available to a larger population.
My Role
I worked on a team with fellow MSIS candidates Eric Chi, Wei Cheng, Ken Copelin, and Peidi Sun. I was involved in all phases of the project, but my primary responsibilities are indicated with an asterisk (*) throughout the write-up.
The Goal
The QEESI team is looking for data to guide the continued transition of the QEESI into an online questionnaire that assists potential patients in discovering, tracking, and treating chemical intolerances.
The Process
Research goals
After building familiarity with the history and current status of the QEESI, our team decided to approach the research with three main themes in mind:
Is the QEESI communicated effectively? Are there problems with the instructions or explanations included in the QEESI?
Is the QEESI formatted in a way that promotes the quick and accurate identification of chemical sensitivities?
Is the QEESI adhering to the current standards set by other questionnaires? Is it secure, accessible, and responsive?
Research approach
The QEESI is a validated questionnaire and, as such, has been proven effective in assessing patient sensitivities. Referencing the two existing online versions and the success of the paper form, our research team focused primarily on the experience of finding, completing, and interpreting the QEESI. This experience guided the recommendations needed to ensure that anyone can complete the questionnaire and pursue the proper treatment for their sensitivities. To evaluate the experience, the research team performed a heuristic evaluation and a competitive analysis and conducted user interviews and usability testing.
Heuristic evaluation
Two members of the research team conducted a heuristic evaluation to determine whether the current QEESI complies with established usability standards. Four of Jakob Nielsen's ten usability heuristics stood out during the evaluation.
Visibility of System Status
The QEESI consists of 50 questions divided into five sections: Chemical Exposure, Other Exposure, Symptoms, Masking Index, and Impact of Sensitivities. However, very little status visibility is offered to the user while completing the QEESI. Providing visibility promotes communication and transparency, and its importance is magnified by the effort required to complete the questionnaire. The questions often use medical terminology that many users are unfamiliar with, and as users progress through the questionnaire, the questions can seem to blend together due to their repetitive nature.
Recognition Rather than Recall
The QEESI presents a block of instructions followed by multiple-choice questions. Users are forced to scroll back "above the fold" to reconfirm that they are answering the questions correctly. Context should remain visible within the user's focus of attention while they complete the form, and this should hold as the instructions change from section to section.
Consistency and Standards
Promoting consistency and standardization in a questionnaire like this is difficult due to the terminology necessary for accurate completion. However, grouping questions of a similar format or context can provide clarity with repetition throughout the questionnaire. The appropriate use of location, alignment, coloring, and other industry standards will assist the QEESI in promoting accurate user completion.
Error Prevention
Error prevention is paramount in a medical questionnaire. Compensating for potential user errors can take many forms: limiting progression until a section is complete, providing appropriate feedback about user inputs, and including context and definitions as needed are all methods of error prevention.
• Provide a progress bar/sidebar to assist the user in navigating the QEESI.
• Ensure the user is prompted with a summary or set of instructions to establish understanding.
• Don't overwhelm the user: present a limited amount of information at a time to promote focus and reduce the effort required to answer questions.
• Maintain consistency across sections where applicable, and take advantage of common design standards to promote ease of use.
• Reduce the occurrence of errors through feedback, in the form of answer-completion checks and added context that helps the user correctly assess their sensitivities.
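As a rough illustration of the "limit progression until a section is complete" recommendation, a section gate might look like the sketch below. The section names and data shapes are hypothetical, not taken from the QEESI implementation.

```typescript
// Hypothetical QEESI section model: each item is rated 0-10,
// and null marks an unanswered item.
interface Section {
  title: string;
  answers: (number | null)[];
}

// A section is complete only when every item has a rating,
// so the "Next" control can stay disabled until then.
function isSectionComplete(section: Section): boolean {
  return section.answers.every((a) => a !== null);
}

// Indices of unanswered items, useful for inline error feedback.
function missingItems(section: Section): number[] {
  return section.answers
    .map((a, i) => (a === null ? i : -1))
    .filter((i) => i !== -1);
}

const symptoms: Section = { title: "Symptoms", answers: [3, null, 7] };
console.log(isSectionComplete(symptoms)); // false
console.log(missingItems(symptoms)); // [1]
```

Returning the indices of missing items (rather than a bare boolean) lets the form highlight exactly which questions still need attention, which is the feedback the heuristic calls for.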
Competitive analysis*
Direct Competitors​​​​​​​
Direct competitors were chosen based on their consumer base. WebMD, Mayo Clinic, and Psychology Today serve users interested in healthcare information who are often unfamiliar with the formal terminology used in medical diagnosis. WebMD uses a wizard-style format to guide the user through questions and then provides a list of common ailments.
Additionally, both WebMD and the Mayo Clinic provide a consistent status and format for their users. This allows them to become familiar with a repeatable process, thus lowering the required mental effort to navigate the wizard.
Indirect Competitors
The research team also analyzed popular websites that use or create questionnaires, including Google Forms, Qualtrics, and 16Personalities. The functions provided by these questionnaire services highlight the need for a questionnaire to be flexible and convenient for the user. By applying question logic and appropriate error prevention, these services effectively reduce the time and effort necessary to complete a questionnaire accurately.
• Small portions of tasks require less effort than larger portions.
• Wizard-style interfaces provide ample space for instructions and offer the convenience of repeatable processes.
• The most successful questionnaires provide multiple forms of feedback.
Usability testing*
Industry average SUS score: 68
QEESI average SUS score: 40
(7 participant scores, ranging from 22.5 to 62.5)
Task 1 - Navigate from your home page to the QEESI Questionnaire
For Task 1, we directed each research participant to navigate to the QEESI homepage and initiate the assessment. On a scale from 1 (very difficult) to 5 (very easy), the average rating for this task was 3.5. While all participants successfully navigated to the correct page, many stated that they had difficulty with the information provided. One participant was initially unable to locate the "Take the QEESI" button, another navigated to the TILT Test website instead, and a third wished there had been an explanation of the QEESI somewhere on the home page. Some of this information is provided, but many participants did not fully read the instructions or found them too wordy to spend much time on. One participant did appreciate that the domain was a .org, which promoted their trust in the website.
Task 2 - Complete the QEESI Questionnaire
While completing the QEESI, much of the participants' feedback concerned confusion about the rating scale, question wording, and symptom definitions. On a scale of 1 (very difficult) to 5 (very easy), the average score was 3.2. However, when asked about their confidence in completing the QEESI, many participants were doubtful.
Task 3 - Interpret the results of the QEESI Questionnaire
To determine whether participants could interpret the results provided by the QEESI correctly, we presented them with two tasks. First, they were asked to rank their intolerances from highest to lowest severity. The researchers presented four participants with the default spider chart and three with a bar chart during this exercise; afterward, all participants were asked which chart they preferred. All seven successfully ranked their intolerances, but only one preferred the spider chart. Then we asked the participants to calculate their MCI using the default table provided by the QEESI. Two of the seven succeeded. On a scale of 1 (very difficult) to 5 (very easy), the average participant rating was 1.5.
• Access should be simple, and instructions should be upfront and friendly.
• Reduce the cognitive load on participants by providing context, reducing answer choices, and providing real examples for ratings.
• Use common formats to display information to promote understanding.
• Leveraging technology/adaptability benefits both the provider and the consumer of questionnaires.
User interviews*
A portion of the interview was dedicated to usability testing, as discussed earlier. However, the researchers also spent part of this time probing for opinions on the QEESI. The following are quotes captured during the tasks mentioned above:
Task 1 - In the process of locating the QEESI page and initiating the questionnaire
“There is too much to read.”
“I expected more information here.” (home page)
“I can’t tell what tab I’m on, they should highlight this.” (how/about/interpret)
Task 2 - In the process of completing the QEESI
“What is a disabling symptom?”
“1-10 is too many, I don’t even know what any of them mean.”
“What do I put if I’m not sure?”
Task 3.1 - In the process of interpreting their results on the spider or bar chart
“What is the green?”
“I don’t know what to do with this information.”
“This bar chart is pretty easy to understand.”
Task 3.2 - In the process of calculating their MCI
“I don’t know what the MCI even is.”
“I would need help understanding this data.”
“What do these statistics have to do with this?”
The Results
The following are our recommendations to improve the usability of the QEESI:
Front page redesign*
Form redesign solutions (Wizard Format)*
We believe that adopting a wizard format for the QEESI would best suit the users' needs and provide the most accurate data for healthcare professionals. This format allows the user to focus on the sole element presented to them. This would increase exposure to the instructions, specific questions, and the context necessary to correctly complete the QEESI. The additional white space can also be used to improve context through the use of definitions, examples, and other beneficial information as testing suggests.
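One way to picture the wizard format is a step renderer that keeps the section's instructions in view alongside the single question being answered, so the user never scrolls back to reconfirm context. The names and layout below are illustrative, not from the actual QEESI implementation.

```typescript
// Hypothetical shape for a single wizard step.
interface Question {
  section: string;
  instructions: string;
  prompt: string;
}

// Render one question per screen, repeating the section
// instructions on every step (recognition rather than recall).
function renderStep(q: Question, index: number, total: number): string {
  return [
    `Section: ${q.section}`,
    q.instructions,
    `(${index + 1}/${total}) ${q.prompt}`,
    "Rate 0 (not a problem) to 10 (severe or disabling problem)",
  ].join("\n");
}

const step = renderStep(
  {
    section: "Chemical Exposure",
    instructions: "Rate how much each exposure affects you.",
    prompt: "Diesel or gas engine exhaust",
  },
  0,
  50
);
console.log(step);
```

Because each step carries its own instructions, changes in wording between sections no longer require the user to scroll above the fold, addressing the recognition-over-recall issue found in the heuristic evaluation.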
Before and after exposure redesign solutions (Progress Visibility)*
Regardless of the format that is adopted, feedback should always be provided to the user in the form of numbering, a completion percentage, or a progress bar. By providing the user with a guideline for how long the process should take and including status visibility along the way, we create a pattern where the user is consistently informed about their status as well as their chemical sensitivities.
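The three feedback forms named above (numbering, a completion percentage, a progress bar) can all be derived from the same answered-question count. A minimal sketch, assuming the QEESI's 50 questions:

```typescript
// The QEESI has 50 questions in total.
const TOTAL_QUESTIONS = 50;

// "Question N of 50" numbering for the current step.
function progressLabel(answered: number): string {
  const current = Math.min(answered + 1, TOTAL_QUESTIONS);
  return `Question ${current} of ${TOTAL_QUESTIONS}`;
}

// Completion percentage, usable directly as a progress-bar width.
function progressPercent(answered: number): number {
  return Math.round((answered / TOTAL_QUESTIONS) * 100);
}

console.log(progressLabel(24)); // "Question 25 of 50"
console.log(progressPercent(25)); // 50
```

Deriving all three displays from one counter keeps the status feedback consistent, so the numbering and the bar can never disagree about how far along the user is.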
Provide context with a hover (Contextual Support)*
The most significant issue we saw with the QEESI was users' inability to grasp the context of the instructions and questions. There were inquiries about specific terms, answer choices, and the applicability of some questions to individual users. While we understand that the language used is likely necessary, we believe steps could be taken to better inform the user: providing definitions for medical terms, offering examples of sensitivity levels, and linking to appropriate resources.
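One lightweight way to deliver hover definitions is to wrap known medical terms in markup that browsers already render as a tooltip. The glossary entries below are illustrative placeholders, not wording from the QEESI itself.

```typescript
// Hypothetical glossary; real definitions would come from the
// QEESI team's clinical materials.
const glossary: Record<string, string> = {
  masking: "Ongoing exposures that can blunt awareness of intolerances.",
  intolerance: "An adverse reaction to a low-level exposure.",
};

// Wrap any glossary term in an <abbr> element whose title
// attribute renders as a native hover tooltip.
function annotateTerms(text: string): string {
  return text.replace(/\b\w+\b/g, (word) => {
    const def = glossary[word.toLowerCase()];
    return def ? `<abbr title="${def}">${word}</abbr>` : word;
  });
}

console.log(annotateTerms("Rate your intolerance severity."));
```

A dedicated tooltip component would allow richer formatting and touch support, but even this native-markup approach puts definitions in the user's focus of attention without lengthening the visible instructions.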
Results redesign solution*
