Evaluating incentives to determine effectiveness is a basic Smart Incentives principle, but it’s not easy. Guest blogger Catherine Searle Renault provides her take on how to avoid bad evaluations that can destroy good incentive programs.

 

Who can argue with the importance of understanding whether taxpayer dollars are being used effectively to meet agreed-upon policy goals like economic growth? Across the country, the concept of regularly evaluating economic development incentives, including those implemented as tax credits, is broadly accepted.

The devil, however, is in the details. Unfortunately, evaluations come in two kinds: good ones, which follow recognized policy evaluation methodologies and principles, and bad ones, whose conclusions are written before a single piece of data is collected.

In Oklahoma and Maryland, two states with vastly different political landscapes, the statutory evaluation is in the hands of evaluation professionals in the economic development agencies, rather than being delegated to a watchdog or audit organization, as is proposed in some states. This ensures that the evaluations are credible and professionally done, and actually answer the questions that the legislatures and the public have.

Evaluations are substantially different from audits in that they focus on policy outcomes rather than on financial management and implementation. The distinction matters because the question legislators ask is whether economic development incentives and tax credits are meeting their policy objectives, not whether they are managed correctly. This is especially relevant because many incentives are simply part of the tax code and are not actively managed by anyone.

Regardless of who gets the responsibility, getting answers to questions about the effectiveness of economic development programs requires more than just having an evaluation in statute.

First, there is the problem that many incentives and tax credits were put into law without a clearly stated goal. It is impossible to tell whether a program is meeting its policy goals and objectives if these are not spelled out. The Legislature should ensure that each existing and future incentive articulates the reasoning behind the program, with more specificity than just “create jobs.”

Second, the Legislature should also ensure that evaluation is built into the enabling bills. Each new incentive should include a section that inserts the program into the biannual evaluation, and appropriations should cover the cost of the evaluation. An unfunded evaluation mandate frustrates agencies and lawmakers alike. Good evaluations are neither free nor cheap.

Third, the Legislature should ensure that the reports it receives reflect good research design and, to the maximum extent possible, show a causal relationship between the program and the outcomes observed. Just because two events coincide doesn’t mean that one caused the other. We should compare companies that have received economic development incentives with those that haven’t, matching them on industry, stage of development, location and other relevant attributes, and then look for differences in job creation, investment or community development.

Fourth, the Legislature needs to recognize the difficulty of balancing the need for appropriate data against the burden of reporting requirements on recipients of incentives and credits. Agencies that “own” relevant data, including revenue and labor agencies, should be required to share it with the designated researchers, and everyone involved must maintain strict data confidentiality.

Fifth, the Legislature needs to understand that evaluation is a process, not an event. Annual or biannual looks at programs need to be seen in their economic context – even the best programs cannot overcome downturns like the one we saw in 2008. Reviewing programs on a regular basis teaches us more than random, isolated snapshots do.

Finally, evaluations are best viewed as an opportunity to improve programs, rather than a “gotcha” exercise in which alternative uses for the expenditures are planned before the ink is dry on the reports. A presumption that all economic development programs are wasteful only strengthens the resolve of program managers and recipients alike to avoid evaluations altogether, rather than risk having good programs destroyed by bad evaluations.

 

Dr. Catherine Searle Renault is Principal and Owner of Innovation Policyworks LLC, and Research Fellow at the Center for Regional Economic Competitiveness, where she specializes in evaluation research.