Aug 04 2021

Response to Gold Standard

We are publishing a response to a public criticism of our analysis of soil carbon offset protocols. Response prepared by Freya Chay, Jeremy Freeman, and Danny Cullenward.

Background

Gold Standard’s Chief Technical Officer, Owen Hewlett, wrote a public criticism of our recent analysis of soil carbon offset protocols. We write here to address the points Mr. Hewlett raised. We thank him for his feedback and for the opportunity to make two important corrections to our original analysis. However, we have also confirmed with Gold Standard that Mr. Hewlett incorrectly described the requirements of his own organization’s protocol, an error that underscores the importance of independent analysis from financially disinterested organizations.

We welcome the chance to improve our work and suggest that this exchange highlights the profound challenge facing buyers and other interested parties attempting due diligence. Before we address these broader points, however, we want to highlight the two corrections Mr. Hewlett’s comments have prompted us to make.

Correction 01 — Environmental and social safeguards

First, Mr. Hewlett correctly points out that Gold Standard has an environmental and social safeguards policy document that applies to all of its protocols. Our initial publication incorrectly concluded that Gold Standard’s soil carbon protocol did not have these safeguards because we overlooked this separate document in our review process.

We have since reviewed Gold Standard’s requirements and concluded that the protocol requires robust environmental and social safeguards, although we could not locate any data privacy protections in this or any other Gold Standard document. Establishing at least two of three safeguard categories leads to a score of 3 out of 3 points for Safeguards, so we have updated our analysis and summary findings accordingly. We regret the error and appreciate Mr. Hewlett’s correction.

As we noted in our original post, the fragmented nature of protocol documentation makes evaluation challenging. A few protocols are described in self-contained documents that facilitate robust evaluation. In other cases (including Gold Standard’s), protocols involve a web of cross-linked documents and policy standards that apply across multiple protocol domains.

As researchers, it is ultimately our responsibility to navigate this complexity, but the fact that it is often difficult to identify the universe of relevant policy documents indicates a broader problem. Complexity makes the diligence process time- and resource-intensive, not simple and clear. That’s a problem in a sector, like soil carbon, where there are documented examples of organizations using complexity to render their true standards opaque. (To be clear, we have no reason to think Gold Standard is doing so.)

Correction 02 — Additionality score definition

Second, Mr. Hewlett describes CarbonPlan’s analysis as concluding that Gold Standard “doesn't require or is ‘transparently cynical’ about additionality.” We presume this is because we gave Gold Standard’s protocol a score of 1 out of 3 possible points for Additionality. Mr. Hewlett’s comments have prompted us to clarify our definition, but not to change the application of that definition nor the score Gold Standard received.

Our original definition for a score of 1 out of 3 points for Additionality read:

No additionality test or a transparently cynical and ineffective test that is designed to create the appearence [sic] of a robust additionality screen.

Our intention with this definition was to assign protocols a score of 1 out of 3 points under any of three separate conditions: (1) the lack of any additionality test (e.g. BCarbon), (2) the inclusion of a transparently cynical additionality test (e.g. the Climate Action Reserve), or (3) an ineffective test (e.g. Gold Standard). Because we failed to properly copyedit the definition, it did not clearly distinguish between outcomes (2) and (3). To address that issue, we have updated the score definition as follows:

No additionality test, or an ineffective additionality test, or a transparently cynical test that is designed to create the appearance of a robust additionality screen.

We assigned Gold Standard an Additionality score of 1 out of 3 because we consider its protocol’s test ineffective: the test adopts a 2007 additionality standard from the UNFCCC Clean Development Mechanism (CDM). The CDM additionality standard requires projects to conduct a highly subjective “barrier analysis” and to discuss why their land use activities don’t constitute “common practice,” without adequate quantitative or soil-specific guidance on any of those concepts; in some cases, a project-level “investment” or financial additionality test can also be required. As we pointed out in our original post, the academic literature indicates that these project-level CDM additionality standards have a problematic track record. Because the concepts are inherently subjective and depend on the judgment calls made by implementing actors, we believe they do not guarantee effective additionality outcomes. Accordingly, we concluded that Gold Standard’s additionality standards are ineffective, but we did not mean to imply that they are transparently cynical.

Technically, the CDM standard is one of three options projects can use to establish additionality under the Gold Standard protocol; see Section 3.1.16 of the Gold Standard LUF Activity Requirements document. The two non-CDM pathways can be described as “positive list” options, which strictly expand, rather than constrain, the universe of eligible projects. Because these two options are available as alternatives to the CDM standard, we marked the CDM “financial test,” “common practice,” and “barrier analysis” additionality tests as “allowed,” rather than “mandatory,” in our detailed metrics for this protocol. Our view is that the CDM standards are inadequate, and therefore Gold Standard’s requirement that projects satisfy either the CDM standards or one of the alternative compliance options is also inadequate.

We presume that Gold Standard would disagree with our characterization that their protocol employs an ineffective additionality test — that is, one that, in our view, doesn’t guarantee the additionality of credits from projects that pass the test. We are confident in our conclusion and suggest interested readers review the primary documents to make up their own minds.

Gold Standard does not require soil carbon sampling

Mr. Hewlett also objected to our assertion that Gold Standard’s protocol does not require soil sampling. We believe his comments misrepresent his organization’s requirements:

The study also claims Gold Standard requires no sampling. We require sampling to 20% confidence. It didn't look at our carbon credit requirements. And so on.

To be clear, Gold Standard does not require that its credits reflect soil carbon changes as measured by soil sampling. The protocol allows projects to select one of three options for crediting carbon: Approach 1 (physical sampling), Approach 2 (calculations or modeling), or Approach 3 (default parameters); see Sections 5.1, 6.1, and 7.1 of the Soil Organic Carbon Framework Methodology. Only Approach 1 uses empirical soil sampling as the basis for carbon credits. When projects choose to employ sampling, a minimum statistical performance standard is required, as Mr. Hewlett suggests; see Section 9 of the Soil Organic Carbon Framework Methodology. But again, making sampling the basis for crediting is elective, not mandatory.

In our view, the standards for physical sampling under Approach 1 are robust. They are among the very best standards we encountered. They are also optional. As we wrote in our review:

Protocol allows for purely model-based crediting, but also permits empirical crediting at the discretion of users. Direct sampling is not required, but if sampling is applied, the methodology is robust. If modeling is applied, there is not enough guidance provided to ensure an appropriate modeling approach and the model used does not have to be publicly available. Not all GHG emission sources that may be affected by the project activity are included. This protocol stands out for having robust sampling requirements that are optional, and weak modeling requirements that can be used instead — potentially leading to either high or low rigor outcomes, depending on project choices.

We have since confirmed with Gold Standard staff that only projects adopting Approach 1 are required to use direct soil sampling to calculate the number of credits earned, with Approaches 2 and 3 relying on model and parameter-based calculations instead. Furthermore, we confirmed that, contrary to Mr. Hewlett’s assertion, Approaches 2 and 3 do not require soil sampling to validate the accuracy of their model- or calculation-based methods. Thus, our original interpretation that the protocol does not require soil sampling either as a basis for crediting or for model validation is correct, and Mr. Hewlett’s public comments are not.

We understand that Gold Standard is considering updates to the validation requirements for Approaches 2 and 3, expected early next year, that could require direct soil sampling. Gold Standard has also indicated that it has “reviewed and rejected” some model-based project proposals, and thus applies higher minimum standards than what is contained in the protocol itself. Nevertheless, we confirmed that the protocol does not require soil sampling for Approaches 2 and 3, neither as the basis for crediting nor to validate model-based calculations — exactly as we indicated in our original analysis.

Why we conducted a “desk evaluation” of protocols

A major theme of Mr. Hewlett’s criticism concerns our decision to conduct the analysis exclusively on the basis of public documents, without speaking to carbon offset registries and other companies that credit soil carbon offsets. He repeatedly suggests that we could have “picked up the phone” to confirm our findings prior to publication. (We did not.)

We conducted our analysis using documents instead of stakeholder interviews in order to replicate the situation facing anyone conducting a due diligence process on the buyer side of the market. Our efforts thus help illustrate the challenges of operating in this sector. Unlike private diligence exercises, however, we did our work in the open, which gives interested stakeholders the chance to respond and correct any errors.

In addition, we have found that pre-publication review with interested stakeholders can introduce bias into an analysis. Most protocols we reviewed allow projects to select from multiple options for satisfying protocol requirements, often with exemptions available at the sole discretion of the crediting organization. This situation makes it difficult to pin down what exactly is required, versus what is merely allowed or what a crediting organization would characterize as a typical outcome within their discretion. In our experience, a focus on the precise text of the protocol requirements is superior to interviews with organizations that have an active business interest in representing their approach in the most positive light.

Mr. Hewlett’s comments further reinforce the importance of focusing on protocol text for determining minimum quality outcomes. Gold Standard apparently told researchers from the Environmental Defense Fund that its Approaches 2 and 3 required direct soil sampling to validate model-based calculations. In Table A-2 of EDF’s recent report on soil carbon, for example, Gold Standard is listed as requiring either “direct” soil sampling (for Approach 1) or “hybrid” methods that include soil sampling to validate modeling (for Approaches 2 and 3; see footnote 3 on p. 37). In fact, however, the validation provisions of Section 16.2 of the Gold Standard protocol specifically do not require soil sampling and provide only for a verifier to “recommend” sampling. We confirmed this understanding in writing with Gold Standard, which indicated that it expects to complete a protocol update that addresses soil sampling requirements for project validation under Approaches 2 and 3 sometime early next year.

That said, the point of an arm’s-length analysis is not to establish that the worst-case possibilities are likely. We made clear that we were not reviewing the quality of any credited projects, nor any informal standards a registry might apply as a matter of discretion — but rather the minimum requirements a buyer could expect with confidence for any credited project without engaging in deeper diligence. Our conclusion is that protocol standards are generally not high in this sector, and that the many options and exemptions available to projects make it hard to say what a buyer can expect from a given protocol. This does not speak to the quality of individual projects, which we noted repeatedly could exceed minimum protocol standards. But it does indicate that buyers need to conduct their own project-level diligence.

None of this is to excuse any errors or mistakes. We firmly believe that putting this analysis into the open is justified and worthwhile, and we remain accountable for any errors or omissions. We thank Mr. Hewlett for correcting our oversight of Gold Standard’s environmental and social safeguards policies, as well as for the opportunity to clarify the description of our Additionality score. When it comes to the factual requirements of complex protocols, we suggest this exchange has highlighted the benefit of public research and debate.

Owen Hewlett’s full comments

For clarity and transparency, we are posting Mr. Hewlett’s full comments here.

Another day another new market entrant with a provocative headline on carbon credit ratings. Apparently Gold Standard doesn't require or is "transparently cynical" about additionality. I don't know man, if that was my finding I think I'd pick up the phone to check...


But they never do, do they? If you trust this study, Gold Standard has "No safeguards included as robust and rigorous requirements" in place; but safeguards/stakeholder requirements are not referenced as part of review...which is exactly where the safeguards and inclusivity requirements live. GS pioneered strong safeguards, as any experienced market researcher would know of course. Should have prompted a call to us you'd think...


The study also claims Gold Standard requires no sampling. We require sampling to 20% confidence. It didn't look at our carbon credit requirements. And so on.


These are all obviously gross oversights on core integrity issues.


Why am I posting this publicly? Because this stream of sponsored research run with no engagement about the actual requirements under review is dangerous. Gold Standard is fully available to consult on reviews of our requirements. I'm sure the other standards are too.


Three considerations for serious credit-raters that will hopefully emerge post-TSVCM:


1 - there are REAL questions to answer about soil carbon, like should it be even used for offsetting or Net Zero? If my research found that competent standards like ACR, Verra, Plan Vivo and GS didn't get it right I'd ask a question about the topic itself....


2 - On additionality: ADOPTION RATES DO NOT EQUAL ADDITIONALITY. That is, along with 'must be a new project', a one way ticket to positivity bias. I can only keep saying - if you see 'newness' or 'adoption rate' as a proxy for additionality then ask serious questions about the agenda...


3 - you can't rate standards without speaking to them. Carbon standards are large constructs that if you miss a page (or in this case, several entire documents) then you will come up with the wrong answers. They even disclaim "The Tillage Module must be read alongside four other documents to form a complete picture of this protocol." YES, IT'S INTENTIONAL (AND MORE THAN FOUR). It means we can cross refer and be consistent, not repeat safeguards in every methodology. Yet the headline 'rating' still manages to be given it [sic] absence of review of all relevant documents.


We all strive to make things simpler, but there is inevitable complexity that comes with rigour. There is a lesson to be learned by us and the researchers here, but it passed us all by because no-one picked up the phone.


There are some valid findings, including for our own standard. We will look at those seriously, as we always do, to identify opportunities for improvement. But please engage with us before you post incorrect findings, I promise our door is open.

