Checklist for Reporting Results of Internet E-Surveys (CHERRIES) for NCT04832932
Checklist Item | Explanation | Response
Describe survey design |
Describe target population, sample frame. Is the sample a convenience sample? (In “open” surveys this is most likely.) |
The sample is a convenience sample. The survey was “open” and designed to take minimal time to complete, roughly a minute or less.
IRB approval |
Mention whether the study has been approved by an IRB. |
The study was approved by the MEBO Research Institutional Review Board (IRB) in 2021 (protocol #20210103MEBO).
Informed consent |
Describe the informed consent process. Where were the participants told the length of time of the survey, which data were stored and where and for how long, who the investigator was, and the purpose of the study? |
The participants were provided a link to the informed consent form, which described all study details, including contact information for the principal investigator and the study coordinator.
Data protection |
If any personal information was collected or stored, describe what mechanisms were used to protect against unauthorized access.
Data protection was ensured through password protection, encryption, and versioning software; through limited access (study participants who also served as study investigators were trained in human subjects protection and did not have access to individual-level information about participants from other groups); and through comprehensive monitoring of all data access activity.
Development and testing |
State how the survey was developed, including whether the usability and technical functionality of the electronic questionnaire had been tested before fielding the questionnaire. |
The experimental design of this study and the format of the paper follow the Consolidated Standards of Reporting Trials (CONSORT) statement for reporting randomized controlled trials and the Consolidated Standards of Reporting Trials of Electronic and Mobile Health Applications and Online Telehealth (CONSORT-EHEALTH) checklist.
Open survey versus closed survey |
An “open survey” is a survey open for each visitor of a site, while a closed survey is only open to a sample which the investigator knows (password-protected survey). |
We used an “open” survey, open to each visitor of a site; each patient subgroup, however, was given a different link with questions more relevant to their condition.
Contact mode |
Indicate whether or not the initial contact with the potential participants was made on the Internet. (Investigators may also send out questionnaires by mail and allow for Web-based data entry.) |
Initial contact with participants was made on the Internet; digitally isolated individuals were contacted by study investigators in person.
Advertising the survey |
How/where was the survey announced or advertised? Some examples are offline media (newspapers), or online (mailing lists – If yes, which ones?) or banner ads (Where were these banner ads posted and what did they look like?). It is important to know the wording of the announcement as it will heavily influence who chooses to participate. Ideally the survey announcement should be published as an appendix. |
The survey was announced on local Google Groups, Nextdoor, and Facebook, on the MEBO and Aurametrix blogs, and in other support groups. The study was also posted on ClinicalTrials.gov, which attracted a few patient referrals via that site.
Web/E-mail |
State the type of e-survey (eg, one posted on a Web site, or one sent out through e-mail). If it is an e-mail survey, were the responses entered manually into a database, or was there an automatic method for capturing responses? |
The e-survey was Web-based; some responses were entered manually into a database.
Context |
Describe the Web site (for mailing list/newsgroup) in which the survey was posted. What is the Web site about, who is visiting it, what are visitors normally looking for? Discuss to what degree the content of the Web site could pre-select the sample or influence the results. For example, a survey about vaccination on an anti-immunization Web site will have different results from a Web survey conducted on a government Web site.
The survey was available on the Google platform. It did not ask for identifiable information or e-mail addresses; some potential participants were nevertheless reluctant to use it, because Google displays users’ e-mail addresses even when the e-mail is not shared with the survey owner. The goal of this study was to promote immunizations; however, we were able to reach individuals who were delaying or skipping vaccinations. The survey was not aggressively promoted.
Mandatory/voluntary |
Was it a mandatory survey to be filled in by every visitor who wanted to enter the Web site, or was it a voluntary survey? |
The survey was voluntary. |
Incentives |
Were any incentives offered (eg, monetary, prizes, or non-monetary incentives such as an offer to provide the survey results)? |
No incentives were offered. |
Time/Date |
In what timeframe were the data collected? |
January 3, 2021 – August 12, 2022
Randomization of items or questionnaires |
To prevent biases items can be randomized or alternated. |
Items or questionnaires were not randomized. |
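For illustration only, a minimal sketch of how item order could be randomized per respondent (a technique this study did not use); the item texts are hypothetical:

```typescript
// Fisher-Yates shuffle: gives each respondent an independent random item order.
function shuffleItems<T>(items: T[]): T[] {
  const result = [...items]; // copy so the canonical order is preserved
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1)); // pick from the unshuffled prefix
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

// Hypothetical items, for illustration only.
console.log(shuffleItems(["Q1: Age group", "Q2: Vaccination status", "Q3: Symptoms"]));
```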
Adaptive questioning |
Use adaptive questioning (certain items, or only conditionally displayed based on responses to other items) to reduce number and complexity of the questions. |
Adaptive questioning was not used; instead, participants from different subgroups were asked slightly different questions.
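For contrast with the per-subgroup links used here, a minimal sketch of adaptive questioning, with hypothetical question IDs and display conditions:

```typescript
// Each question may carry a predicate deciding whether it is shown,
// based on the answers collected so far.
interface Question {
  id: string;
  text: string;
  showIf?: (answers: Record<string, string>) => boolean;
}

const questions: Question[] = [
  { id: "vaccinated", text: "Have you been vaccinated?" },
  {
    id: "side_effects",
    text: "Did you experience side effects?",
    showIf: (a) => a["vaccinated"] === "yes", // only shown to vaccinated respondents
  },
];

// Filter the questionnaire down to the items relevant to this respondent.
function visibleQuestions(answers: Record<string, string>): Question[] {
  return questions.filter((q) => !q.showIf || q.showIf(answers));
}

console.log(visibleQuestions({ vaccinated: "no" }).map((q) => q.id)); // ["vaccinated"]
```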
Number of Items |
What was the number of questionnaire items per page? The number of items is an important factor for the completion rate. |
Version 2 had only 3 questions in total; the largest (generic) variant of the survey had 11 questions in total, all optional.
Number of screens (pages) |
Over how many pages was the questionnaire distributed? The number of items is an important factor for the completion rate. |
One or two, depending on the version.
Completeness check |
It is technically possible to do consistency or completeness checks before the questionnaire is submitted. Was this done, and if “yes”, how (usually JAVAScript)? An alternative is to check for completeness after the questionnaire has been submitted (and highlight mandatory items). If this has been done, it should be reported. All items should provide a non-response option such as “not applicable” or “rather not say”, and selection of one response option should be enforced. |
Consistency and completeness checks were not performed, to allow more people to participate. Incomplete entries were not analyzed.
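As a sketch of the kind of pre-submission check (usually JavaScript) that was deliberately omitted here, assuming hypothetical field names:

```typescript
type SurveyResponse = Record<string, string | undefined>;

// Every item offers a non-response option, so a blank answer signals a
// skipped item rather than a declined one.
const NON_RESPONSE = "rather not say";

// Returns the item IDs that would be highlighted as incomplete before submission.
function missingItems(response: SurveyResponse, itemIds: string[]): string[] {
  return itemIds.filter((id) => {
    const answer = response[id];
    return answer === undefined || answer.trim() === "";
  });
}

// Hypothetical example: q3 was left blank and would be flagged.
console.log(missingItems({ q1: "yes", q2: NON_RESPONSE, q3: undefined }, ["q1", "q2", "q3"]));
```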
Review step |
State whether respondents were able to review and change their answers (eg, through a Back button or a Review step which displays a summary of the responses and asks the respondents if they are correct). |
Yes, respondents could change their answers, submit multiple responses, and contact their investigator with any questions.
Unique site visitor |
If you provide view rates or participation rates, you need to define how you determined a unique visitor. There are different techniques available, based on IP addresses or cookies or both. |
We did not use cookies and did not record IP addresses. Unique responses were determined by unique IDs linked to valid entries.
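A minimal sketch of this linkage step, assuming hypothetical data shapes: only submissions whose unique ID matches an enrolled participant are kept.

```typescript
interface Submission {
  id: string; // unique ID issued at enrollment
  answers: Record<string, string>;
}

const enrolledIds = new Set(["A101", "A102", "B201"]); // illustrative IDs

// Keep only submissions that can be linked to an enrolled participant.
function validSubmissions(all: Submission[]): Submission[] {
  return all.filter((s) => enrolledIds.has(s.id));
}
```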
View rate (Ratio of unique survey visitors/unique site visitors) |
Requires counting unique visitors to the first page of the survey, divided by the number of unique site visitors (not page views!). It is not unusual to have view rates of less than 0.1 % if the survey is voluntary. |
The ratio of unique survey visitors to unique site visitors varied by site: over 50% for private neighborhood Google Groups and Nextdoor forums, 10%-20% for chronic disease support groups, and 0.1%-3% for Facebook and Reddit groups.
Participation rate (Ratio of unique visitors who agreed to participate/unique first survey page visitors) |
Count the unique number of people who filled in the first survey page (or agreed to participate, for example by checking a checkbox), divided by visitors who visit the first page of the survey (or the informed consents page, if present). This can also be called “recruitment” rate. |
We cannot estimate a “recruitment rate” based on Web visits to Google Forms, since we did not collect this information.
Completion rate (Ratio of users who finished the survey/users who agreed to participate) |
The number of people submitting the last questionnaire page, divided by the number of people who agreed to participate (or submitted the first survey page). This is only relevant if there is a separate “informed consent” page or if the survey goes over several pages. This is a measure for attrition. Note that “completion” can involve leaving questionnaire items blank. This is not a measure for how completely questionnaires were filled in. (If you need a measure for this, use the word “completeness rate”.) |
The completion rate (ratio of users who entered information on the last page vs the first page) was close to 100%, since the survey did not have many questions.
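For reference, the three CHERRIES rates computed side by side; the counts below are illustrative only, not study data:

```typescript
// Illustrative counts only -- not data from this study.
const uniqueSiteVisitors = 1000;
const uniqueSurveyVisitors = 150; // visitors who reached the first survey page
const agreedToParticipate = 120;
const finishedSurvey = 115;

const viewRate = uniqueSurveyVisitors / uniqueSiteVisitors;           // 0.15
const participationRate = agreedToParticipate / uniqueSurveyVisitors; // 0.80
const completionRate = finishedSurvey / agreedToParticipate;          // ~0.96

console.log({ viewRate, participationRate, completionRate });
```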
Cookies used |
Indicate whether cookies were used to assign a unique user identifier to each client computer. If so, mention the page on which the cookie was set and read, and how long the cookie was valid. Were duplicate entries avoided by preventing users access to the survey twice; or were duplicate database entries having the same user ID eliminated before analysis? In the latter case, which entries were kept for analysis (eg, the first entry or the most recent)? |
Cookies were not used. Incomplete entries that could not be linked to enrolled participants were not analyzed. |
IP check |
Indicate whether the IP address of the client computer was used to identify potential duplicate entries from the same user. If so, mention the period of time for which no two entries from the same IP address were allowed (eg, 24 hours). Were duplicate entries avoided by preventing users with the same IP address access to the survey twice; or were duplicate database entries having the same IP address within a given period of time eliminated before analysis? If the latter, which entries were kept for analysis (eg, the first entry or the most recent)? |
An IP check was not performed.
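Had an IP check been used, a typical implementation keeps the first entry per IP address and drops later entries arriving within a fixed window; a sketch with a hypothetical entry shape:

```typescript
interface Entry {
  ip: string;
  submittedAt: Date;
}

const WINDOW_MS = 24 * 60 * 60 * 1000; // 24-hour window, as in the example above

// Keep the first entry per IP; drop later entries from the same IP
// that arrive within the window of the last kept entry.
function dropIpDuplicates(entries: Entry[]): Entry[] {
  const lastKept = new Map<string, number>();
  const kept: Entry[] = [];
  for (const e of [...entries].sort((a, b) => +a.submittedAt - +b.submittedAt)) {
    const prev = lastKept.get(e.ip);
    if (prev === undefined || +e.submittedAt - prev >= WINDOW_MS) {
      kept.push(e);
      lastKept.set(e.ip, +e.submittedAt);
    }
  }
  return kept;
}
```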
Log file analysis |
Indicate whether other techniques to analyze the log file for identification of multiple entries were used. If so, please describe. |
Log file analysis was not performed.
Registration |
In “closed” (non-open) surveys, users need to login first and it is easier to prevent duplicate entries from the same user. Describe how this was done. For example, was the survey never displayed a second time once the user had filled it in, or was the username stored together with the survey results and later eliminated? If the latter, which entries were kept for analysis (eg, the first entry or the most recent)? |
Since it was an open survey, registration was not needed.
Handling of incomplete questionnaires |
Were only completed questionnaires analyzed? Were questionnaires which terminated early (where, for example, users did not go through all questionnaire pages) also analyzed? |
Only submissions that could be linked to entries for enrolled participants were analyzed. |
Questionnaires submitted with an atypical timestamp |
Some investigators may measure the time people needed to fill in a questionnaire and exclude questionnaires that were submitted too soon. Specify the timeframe that was used as a cut-off point, and describe how this point was determined. |
No answers were submitted with an atypical timestamp; couples’ entries were usually separated by a minute.
Statistical correction |
Indicate whether any methods such as weighting of items or propensity scores have been used to adjust for the non-representative sample; if so, please describe the methods. |
Not applicable. |
This checklist has been modified from Eysenbach G. Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res. 2004 Sep 29;6(3):e34 [erratum in J Med Internet Res. 2012;14(1):e8]. Article available at https://www.jmir.org/2004/3/e34/; erratum available at https://www.jmir.org/2012/1/e8/. Copyright ©Gunther Eysenbach. Originally published in the Journal of Medical Internet Research, 29.9.2004 and 04.01.2012.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited.