Fragranced consumer products: effects on asthmatic Australians

Exposure to fragranced consumer products, such as air fresheners and cleaning supplies, is associated with adverse health effects such as asthma attacks, breathing difficulties, and migraine headaches. This study investigated the prevalence and types of health problems associated with exposure to fragranced products among asthmatic Australians. Nationally representative cross-sectional data were obtained in June 2016 with an online survey of adult Australians (n = 1098), of which 28.5% were medically diagnosed with asthma or an asthma-like condition. Nationally, 55.6% of asthmatics, and 23.9% of non-asthmatics, report adverse health effects after exposure to fragranced products. Specifically, 24.0% of asthmatics report an asthma attack. Moreover, 18.2% of asthmatics lost workdays or a job in the past year due to fragranced products in the workplace. Over 20% of asthmatics are unable to access public places and restrooms that use air fresheners. Exposure to fragranced products is associated with health problems, some potentially serious, in an estimated 2.2 million asthmatic adult Australians. Asthmatics were proportionately more affected than non-asthmatics (prevalence odds ratio 3.98; 95% confidence interval 3.01–5.24). Most asthmatics would prefer workplaces, healthcare facilities, and environments that are fragrance-free, which could help reduce adverse effects.

Electronic supplementary material: The online version of this article (10.1007/s11869-018-0560-x) contains supplementary material, which is available to authorized users.
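The reported prevalence odds ratio and confidence interval can be checked against the abstract's percentages. The sketch below reconstructs approximate 2x2 cell counts by rounding (an assumption, not the study's raw data) and applies the standard Wald interval for the log odds ratio:

```python
import math

# Approximate 2x2 counts reconstructed from the reported percentages
# (n = 1098, 28.5% asthmatic; 55.6% of asthmatics and 23.9% of
# non-asthmatics reporting adverse effects). These counts are
# illustrative reconstructions, not the study's raw data.
n = 1098
n_asthma = round(n * 0.285)        # ~313 asthmatics
n_non = n - n_asthma               # ~785 non-asthmatics
a = round(n_asthma * 0.556)        # asthmatics reporting effects
b = n_asthma - a                   # asthmatics reporting none
c = round(n_non * 0.239)           # non-asthmatics reporting effects
d = n_non - c                      # non-asthmatics reporting none

por = (a * d) / (b * c)                   # prevalence odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(OR), Wald method
lo = math.exp(math.log(por) - 1.96 * se)  # lower 95% limit
hi = math.exp(math.log(por) + 1.96 * se)  # upper 95% limit
print(f"POR = {por:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# → POR = 3.98, 95% CI 3.01-5.24
```

The rounded counts reproduce the published estimate (3.98; 3.01–5.24), which suggests the authors used the conventional Wald interval on the log odds scale.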


IRB approval
Mention whether the study has been approved by an IRB.

Informed consent
Describe the informed consent process. Where were the participants told the length of time of the survey, which data were stored and where and for how long, who the investigator was, and the purpose of the study?

Data protection
If any personal information was collected or stored, describe what mechanisms were used to protect the data from unauthorized access.

Development and testing
State how the survey was developed, including whether the usability and technical functionality of the electronic questionnaire had been tested before fielding the questionnaire.

Contact mode
Indicate whether or not the initial contact with the potential participants was made on the Internet. (Investigators may also send out questionnaires by mail and allow for Web-based data entry.)

Item Category Checklist Item Explanation
Advertising the survey
How/where was the survey announced or advertised? Some examples are offline media (newspapers), or online (mailing lists; if yes, which ones?), or banner ads (where were these banner ads posted and what did they look like?). It is important to know the wording of the announcement as it will heavily influence who chooses to participate. Ideally the survey announcement should be published as an appendix.

Survey administration
Web/E-mail
State the type of e-survey (eg, one posted on a Web site, or one sent out through e-mail).

Checklist for Reporting Results of Internet E-Surveys (CHERRIES)
Adaptive questioning
State whether adaptive questioning (certain items displayed conditionally, based on responses to other items) was used to reduce the number and complexity of the questions.

Number of Items
What was the number of questionnaire items per page? The number of items is an important factor for the completion rate.

Number of screens (pages)
Over how many pages was the questionnaire distributed? The number of screens (pages) is an important factor for the completion rate.

Completeness check
It is technically possible to do consistency or completeness checks before the questionnaire is submitted. Was this done, and if "yes", how (usually JavaScript)? An alternative is to check for completeness after the questionnaire has been submitted (and highlight mandatory items). If this has been done, it should be reported. All items should provide a non-response option such as "not applicable" or "rather not say", and selection of one response option should be enforced.
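As a concrete illustration of the post-submission alternative, the Python sketch below flags required items left blank; the item names and response values are hypothetical, and a client-side JavaScript check would follow the same logic:

```python
# Minimal server-side completeness check, run after submission.
# Item names are illustrative only. A non-response choice such as
# "rather not say" counts as answered, so respondents are never
# forced to disclose.
REQUIRED_ITEMS = ["age", "asthma_diagnosis", "fragrance_exposure"]

def missing_items(response: dict) -> list:
    """Return the required items that were left blank."""
    return [item for item in REQUIRED_ITEMS
            if not str(response.get(item, "")).strip()]

resp = {"age": "42", "asthma_diagnosis": "rather not say"}
print(missing_items(resp))   # → ['fragrance_exposure']
```

A survey back end would re-display the questionnaire with the returned items highlighted as mandatory.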

Review step
State whether respondents were able to review and change their answers (eg, through a Back button or a Review step which displays a summary of the responses and asks the respondents if they are correct).

Completion rate
The number of people agreeing to participate (or submitting the first survey page) divided by the number of people submitting the last questionnaire page. This is only relevant if there is a separate "informed consent" page or if the survey goes over several pages. This is a measure for attrition. Note that "completion" can involve leaving questionnaire items blank. This is not a measure for how completely questionnaires were filled in. (If you need a measure for this, use the word "completeness rate".)

Preventing multiple entries from the same individual
Cookies used
Indicate whether cookies were used to assign a unique user identifier to each client computer. If so, mention the page on which the cookie was set and read, and how long the cookie was valid. Were duplicate entries avoided by preventing users access to the survey twice; or were duplicate database entries having the same user ID eliminated before analysis? In the latter case, which entries were kept for analysis (eg, the first entry or the most recent)?
IP check
Indicate whether the IP address of the client computer was used to identify potential duplicate entries from the same user. If so, mention the period of time for which no two entries from the same IP address were allowed (eg, 24 hours). Were duplicate entries avoided by preventing users with the same IP address access to the survey twice; or were duplicate database entries having the same IP address within a given period of time eliminated before analysis? If the latter, which entries were kept for analysis (eg, the first entry or the most recent)?
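The IP-based deduplication described above can be sketched as follows; the 24-hour window matches the checklist's example, while the addresses and timestamps are invented:

```python
from datetime import datetime, timedelta

# Drop entries from the same IP address submitted within 24 hours of
# an earlier kept entry, retaining the FIRST entry (the checklist's
# other option is keeping the most recent). Data are illustrative.
WINDOW = timedelta(hours=24)

def dedup_by_ip(entries):
    """entries: list of (ip, timestamp) tuples; returns the entries
    kept after applying the 24-hour same-IP rule."""
    last_kept = {}
    kept = []
    for ip, ts in sorted(entries, key=lambda e: e[1]):
        if ip in last_kept and ts - last_kept[ip] < WINDOW:
            continue                      # duplicate within the window
        last_kept[ip] = ts
        kept.append((ip, ts))
    return kept

entries = [
    ("203.0.113.5", datetime(2016, 6, 1, 9, 0)),
    ("203.0.113.5", datetime(2016, 6, 1, 15, 0)),   # < 24 h later: dropped
    ("203.0.113.5", datetime(2016, 6, 2, 10, 0)),   # window elapsed: kept
    ("198.51.100.7", datetime(2016, 6, 1, 9, 30)),
]
print(len(dedup_by_ip(entries)))   # → 3
```

Whichever rule is applied (first kept vs. most recent kept), CHERRIES asks that it be stated explicitly.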
Log file analysis
Indicate whether other techniques to analyze the log file for identification of multiple entries were used. If so, please describe.

Registration
In "closed" (non-open) surveys, users need to log in first and it is easier to prevent duplicate entries from the same user. Describe how this was done. For example, was the survey never displayed a second time once the user had filled it in, or was the username stored together with the survey results and later eliminated? If the latter, which entries were kept for analysis (eg, the first entry or the most recent)?

Handling of incomplete questionnaires
Were only completed questionnaires analyzed? Were questionnaires which terminated early (where, for example, users did not go through all questionnaire pages) also analyzed?
Questionnaires submitted with an atypical timestamp
Some investigators may measure the time people needed to fill in a questionnaire and exclude questionnaires that were submitted too soon. Specify the timeframe that was used as a cut-off point, and describe how this point was determined.
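A cut-off filter of this kind might look like the sketch below; the 60-second threshold and the durations are illustrative only, since the checklist asks authors to report and justify their own cut-off:

```python
# Exclude questionnaires completed implausibly fast. The 60-second
# cut-off and the durations below are invented for illustration;
# a real study must report how its threshold was determined.
CUTOFF_SECONDS = 60

durations = [35, 412, 580, 22, 300]   # seconds to complete, per respondent
kept = [d for d in durations if d >= CUTOFF_SECONDS]
print(len(kept))   # → 3
```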
Statistical correction
Indicate whether any methods such as weighting of items or propensity scores have been used to adjust for the non-representative sample; if so, please describe the methods.
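As a minimal illustration of the weighting option, the sketch below computes post-stratification weights as population share divided by sample share for each stratum; the strata and shares are invented, not taken from the study:

```python
# Post-stratification weighting sketch: each respondent is weighted by
# (population share of their stratum) / (sample share of their stratum),
# so over-represented groups count less and under-represented groups
# count more. Shares below are illustrative only.
population_share = {"male": 0.49, "female": 0.51}
sample_share     = {"male": 0.40, "female": 0.60}

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)   # males up-weighted (>1), females down-weighted (<1)
```

A weighted prevalence estimate then multiplies each response by its stratum weight before averaging; propensity-score adjustment, the checklist's other example, instead models the probability of being in the sample.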