Frequently Asked Questions

(Last update: November 18 2024)

Here you will find answers to frequently asked questions about the procedure, the experimental design, and the requirements for the software implementation of your study.

Please do not hesitate to contact the project coordinators via email if you have any further questions.

Design submission & selection

Yes, it is possible to group participants into small teams of 2 or 4. However, the details of implementation, such as setting up waiting rooms, must be managed by the research teams (RTs) using their software. While the project coordinators will assign participants to the RTs' various conditions, forming these groups accurately will be the responsibility of the RTs.

Yes, this is possible.

No. However, the project coordinators will only recruit Prolific participants with English fluency and an approval rate of 90% or higher.

If more than 42 RTs submit designs, we will select a random set of 42 RTs using a pre-registered Stata script.

Experimental Implementation (Software)

The experiment as well as the instructions have to be implemented in English. RTs are responsible for hosting the experimental software and for providing the project coordinators (PCs) with:
  1. One anonymous link per experimental condition to be used on Prolific
  2. Access to the server on which the experimental software is hosted such that PCs are able to retrieve the data.
Also note that the experimental software (oTree or Qualtrics) has to be compatible with Prolific – i.e., it has to be possible to ...
  1. ... send participants to the experiment using an anonymous link,
  2. ... record their Prolific IDs, and
  3. ... redirect them back to Prolific upon completion.
Finally, the design has to adhere to Prolific's terms & conditions for researchers.

As project coordinators, we require server access to download all original data from individual experiments. For those using oTree, this involves access to the oTree admin interface, where one configures sessions and exports data. If your experiments are conducted on Qualtrics, please provide us with access to your Qualtrics survey to facilitate data retrieval. The preferred solution would be to share your surveys (Names: YourID_CONTROL and YourID_INTERVENTION) directly with our Qualtrics account: rene.schwaiger@uibk.ac.at. Alternatively, you could send us two separate *.qsf files—one for the control condition and one for the intervention condition.

Participants will be from the general population of the United States of America. They will be invited from a selection of the Prolific database which is defined as follows: (a) fluency in English, (b) an approval rate of 90% or higher. The participants who accept the invitations will be randomly allocated to the individual studies and conditions.

No, we, the project coordinators, will add one on a general welcome screen (the same for all studies).

We will implement a CAPTCHA task in all studies on a general welcome screen and exclude participants who answer it incorrectly before randomizing them to the individual studies.

No. Each RT will have to provide one anonymous link per experimental condition, i.e., two links. The randomization will be done by the project coordinators for all studies. In particular, Prolific participants will be randomized into up to 42 x 2 conditions.

No, research teams are responsible for hosting the experimental software. They must provide the project coordinators (PCs) with an anonymous link for each experimental condition to distribute to participants via Prolific. Additionally, teams must grant access to the servers hosting the software to enable PCs to retrieve data (refer to "What does giving server access to the project coordinators mean?" for details). The PCs will cover fixed payments of GBP 1.50 per participant and an average bonus of GBP 3 per participant. Should the average payments to participants exceed these amounts, the research teams must cover the excess.

The outcome measure should explicitly reflect tangible support for governmental imposition of carbon pricing. This can include support for its implementation or support for an increase in its rate to reflect the social cost of carbon. Real-world support thus goes beyond merely stating endorsement of carbon pricing within the experimental setup or supporting its implementation only in an experimental game. The outcome measure must carry clear, real-world consequences for support of government-imposed carbon pricing that result from participants' actions.

The spirit of the project is to tackle the introduction of, or a substantial increase in, the price of carbon, aiming to advance towards "social carbon pricing". A social price of carbon is a monetary estimate of the economic damage associated with emitting one additional ton of carbon dioxide (CO2) into the atmosphere. This pricing mechanism aims to capture the external costs of CO2 emissions, such as changes in net agricultural productivity, human health effects, property damage from increased flood risk, and the value of ecosystem services lost due to climate change. By incorporating these costs into the price of carbon, it incentivizes the reduction of carbon emissions to reflect their true environmental impact.

According to a recent report from the U.S. Environmental Protection Agency, the estimates for the social cost of greenhouse gases amount to a lower bound of $120 per ton of CO2. This estimate translates roughly to an added cost of about $1 per gallon of gasoline, reflecting the broader environmental and social impacts of its use. This pricing framework helps guide regulatory actions and policy decisions aimed at mitigating climate change by reflecting the true economic costs of carbon emissions.
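The rough per-gallon figure can be checked against the EPA's estimate of about 8,887 grams of CO2 emitted per gallon of gasoline burned. A quick back-of-the-envelope calculation in Python (the emission factor is our own assumption taken from EPA fuel-economy figures, not a number stated in this FAQ):

```python
# Social cost of carbon (lower bound from the EPA report),
# in dollars per metric ton of CO2
sc_co2_per_ton = 120

# Approximate CO2 emitted by burning one gallon of gasoline
# (EPA figure: ~8,887 grams), converted from grams to metric tons
co2_tons_per_gallon = 8887 / 1_000_000

# Implied carbon cost per gallon of gasoline
cost_per_gallon = sc_co2_per_ton * co2_tons_per_gallon
print(round(cost_per_gallon, 2))  # 1.07, i.e. roughly $1 per gallon
```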

For more detailed analysis and context, refer to the table below or the full EPA report on the social cost of greenhouse gases (2023).
SC-CO2 (social cost of CO2), in 2020 dollars per metric ton of CO2, by emission year and near-term Ramsey discount rate. Values are rounded to two significant figures. Source: EPA report on the social cost of greenhouse gases (2023).

Emission Year | Near-term rate (2.5%) | Near-term rate (2.0%) | Near-term rate (1.5%)
2020          | 120                   | 190                   | 340
2030          | 140                   | 230                   | 380
2040          | 170                   | 270                   | 430
2050          | 200                   | 310                   | 480
2060          | 230                   | 350                   | 530
2070          | 260                   | 380                   | 570
2080          | 280                   | 410                   | 600

Yes, we will need you to collect Prolific participant IDs, either manually (by asking participants to enter their ID in a designated field) or automatically (by retrieving the ID from the URL parameters). To enable the latter, we will redirect all participants from Prolific to the RTs' control or intervention conditions with two URL parameters appended: (i) PROLIFIC_PID={{%PROLIFIC_PID%}} for Qualtrics and (ii) participant_label={{%PROLIFIC_PID%}} for oTree.

Two illustrative examples of how the links will look:

Control: https://your_design_control.com?PROLIFIC_PID=abc123456&participant_label=abc123456
Intervention: https://your_design_intervention.com?PROLIFIC_PID=abc1234567&participant_label=abc1234567

In Qualtrics, you can retrieve the relevant parameter "PROLIFIC_PID" by setting an embedded data field with exactly this name. Please refer to the official tutorial under the following link: https://researcher-help.prolific.com/en/article/fbbd36.

If you are using oTree, the participants' Prolific IDs will automatically be stored as participant_label. Hence, we require you not to use pre-defined participant labels.
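If you want to verify your study links locally, the ID can be parsed from such a URL with standard tooling. A minimal Python sketch (the function name and fallback order are our own; in practice, Qualtrics and oTree capture these parameters for you automatically):

```python
from urllib.parse import urlparse, parse_qs

def prolific_id_from_url(url: str) -> str:
    """Extract the Prolific ID from an incoming study link.

    Checks both parameter names used in this project:
    PROLIFIC_PID (Qualtrics) and participant_label (oTree).
    """
    params = parse_qs(urlparse(url).query)
    for key in ("PROLIFIC_PID", "participant_label"):
        if key in params:
            return params[key][0]
    raise ValueError("No Prolific ID found in URL")

# Example with the illustrative control link from above:
url = "https://your_design_control.com?PROLIFIC_PID=abc123456&participant_label=abc123456"
print(prolific_id_from_url(url))  # abc123456
```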

You will need to implement the Ex-post survey in your own Qualtrics or oTree instance. We have prepared a document that provides detailed instructions on how to name the variables and the values that need to be stored. You can download the Ex-post survey document using the following link: Download Ex-Post Survey.

Please prepare a completion link at the very end of the experiment (after the ex-post survey) by redirecting participants to https://manydesignscarbon.online/completed, where the actual completion code will be embedded. This will automatically redirect participants back to Prolific and log their completion.

Since we are using a U.S. sample for all designs, please use USD as the currency in your setup. Convert your payoffs from GBP to USD as follows: for the flat fee of £1.50, use $2.00 instead. For the maximum bonus payoff of £3.00, use $4.00. If your design doesn't require the £3.00 bonus payoff, convert your specific maximum bonus amount from GBP to USD yourself and round to the nearest whole or first decimal place. For example, if your bonus payoff is £1.00, use $1.30 (based on £1.00 ~ $1.33 as of September 19th, 2024).
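For bonus amounts other than £3.00, the conversion described above can be sketched as follows (the default rate of 1.33 and the rounding to the first decimal place mirror the £1.00 example in this answer; the function name is our own):

```python
def gbp_to_usd(gbp: float, rate: float = 1.33) -> float:
    """Convert a GBP payoff to USD, rounded to the first decimal place.

    Uses the GBP/USD rate of ~1.33 (as of September 19th, 2024)
    quoted in this FAQ; check the current rate before applying it.
    """
    return round(gbp * rate, 1)

print(gbp_to_usd(1.00))  # 1.3, matching the £1.00 example
```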

Data usage & authorship

Initially, after data collection, the data from each experiment will be accessible only to the Research Team (RT) associated with the specific design proposal and the Project Coordinators (PCs). RTs can use the data from their design proposal, but are prohibited from releasing, publicizing, or discussing their findings until the manuscript of #ManyDesignsCarbon is published in a journal. After this period, the embargo will expire and all data will be released in the project's OSF repository under a CC-BY license and will be freely available for public use.

For the final paper, which will include analyses of RTs' design proposals and responses to Surveys A and B, the project coordinators will prepare the draft. Co-authorship will be offered to all members of each RT. However, authorship will be confined to RTs who complete all project phases: those whose design proposals are selected, who provide suitable and feasible experimental software, who complete both surveys, and who submit their data by the designated deadlines. RT members will have 10 days to review any paper drafts before they are submitted for publication.