Evaluations should be part of a dialogue with policymakers and other stakeholders to improve programs and achieve shared economic development goals. To this end – and building on previous exchanges among state economic development leaders on ways to improve incentive program impact and evaluation – the Pew Charitable Trusts recently hosted an Evaluators Roundtable for state executive and legislative branch agencies tasked with conducting tax credit and incentive evaluations. 

The Roundtable covered the following themes:

Identifying goals, choosing metrics and accessing data

A clear goal or performance statement is the foundation of good evaluations, not to mention effective program management. A surprising number of tax credit and other incentive programs have vague objectives that can’t be measured or assessed. Logic models that make explicit the steps between the policy and the hoped-for outcome can help identify appropriate evaluation measures.

The measures by which programs are evaluated also need to meet certain standards to be useful. Identifying sources for the data, establishing a baseline for measurement, and verifying company-provided data are all important elements of a successful evaluation process.

Accessing state administrative records, particularly for tax credit evaluations, is necessary but remains a challenge in many states. Balancing confidentiality and data security concerns against the need to understand the actual cost and utility of a plethora of tax credits requires a partnership built on trust among the state agencies asked to share data for evaluation purposes. Participants and speakers discussed creating formal MOUs among agencies, making data-sharing requests that are focused on obtaining only the information that’s needed rather than enabling blanket access, ensuring data users can maintain the data securely, asking or requiring companies receiving credits to allow data sharing, and being very specific about what is and is not allowed by law, rather than assuming based on habit.

Concepts and tools for measuring impact

In addition to the usual topics of what and when to measure, the group spent time thinking through several issues that make evaluations of incentive program effectiveness especially difficult. These include the “but for” question regarding the extent to which the tax credit or incentive actually influenced behavior, displacement effects on other businesses that did not receive incentives or credits, opportunity costs within government, and the effects on the current population relative to new residents.

My take is that the group reached consensus that these effects need to be considered more explicitly in evaluations, and that sensitivity analyses or ranges of impact estimates are useful; we should be careful, however, about presenting point calculations of these different effects.
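To make the idea of ranges concrete, here is a minimal, purely illustrative sketch of how an evaluator might bound a jobs estimate under different “but for” and displacement assumptions. Every figure and parameter range below is hypothetical, not drawn from any actual program:

```python
# Illustrative only: all numbers and parameter ranges are hypothetical.

def net_jobs(gross_jobs, but_for_share, displacement_rate):
    """Jobs attributable to the incentive, after adjusting for the share of
    activity that would have occurred anyway ("but for" share) and for jobs
    displaced at businesses that did not receive the incentive."""
    attributable = gross_jobs * but_for_share
    return attributable * (1 - displacement_rate)

gross_jobs = 1000  # jobs reported by credit recipients (hypothetical)

# Present a low/high range instead of a single point estimate.
low = net_jobs(gross_jobs, but_for_share=0.10, displacement_rate=0.50)
high = net_jobs(gross_jobs, but_for_share=0.50, displacement_rate=0.20)

print(f"Estimated net new jobs: {low:.0f} to {high:.0f}")
# prints "Estimated net new jobs: 50 to 400"
```

The point of the sketch is that the honest answer spans an order of magnitude; reporting only one number inside that range would overstate the evaluation’s precision.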

Using evaluations to inform policy discussions

Given all of the above, good evaluations are not easy reads and don’t usually have simple conclusions. That said, evaluations need to be conveyed to policymakers in a digestible format to be worthwhile, since the whole purpose of the exercise is to improve policies and enhance their effectiveness.

Keys to making evaluation information more accessible include providing the same data in multiple formats for different audiences (the full report, one-pagers, even tweets), using graphics to convey the message, providing ranges for findings rather than single numbers, and offering a clear conclusion.

Another takeaway is that the more often we talk about incentives and evaluations, the better the conversation becomes. There is a learning curve for everyone, and it takes time to build common understanding around the concepts. The good news is that this is happening in many states.