We stand on the cusp of an artificially intelligent tax administration world. The transformation of tax authorities to this near-future state appears inevitable and unstoppable. The tax authorities in Australia, Canada, the United Kingdom and the United States are all well advanced down this digitisation and automation path, driven by two motivators: the potential for technology to raise revenue collection rates, and its capacity to meet the instantaneous service demands of taxpayers efficiently and effectively.

My paper, recently published in the eJournal of Tax Research, details more fully the specific digitisation efforts of the tax authorities in each of these jurisdictions. I argue that these efforts, particularly higher-level uses of artificial intelligence (‘AI’), raise significant challenges to the traditional approaches to delineating the boundary of tax authority susceptibility to taxpayer suit.

The current boundaries of tax authority immunity

All taxing nations are, to some degree, understandably concerned to prevent exposing the revenue authority to liability that would have undue economic impacts. Although jurisdiction-specific rules vary significantly, my analysis reveals commonalities across jurisdictions in the public policy underpinnings of the various measures and approaches courts use to delineate the boundaries of tax authority exposure to taxpayer suit.

In particular, justiciability concerns in their many guises are apparent across all jurisdictions. There is also a common concern to obviate the risk of opening the floodgates to potentially large and indeterminate liability to taxpayers, and evidence of a desire to avoid triggering over-defensive responses among tax officials which might have chilling effects on the proper fulfilment of their duties.

My core contention is that digitisation will significantly affect each of these three common public policy underpinnings of tax authority immunity from suit. Below is just a sample of the many potential issues identified in my paper. Further issues will also undoubtedly emerge as technology and its applications continue to evolve.

Justiciability

Courts have developed numerous tests to assist in determining the justiciability of various types of taxpayer claims against tax officials. Many of these tests limit justiciability to matters which are administrative or operational in nature, whilst maintaining immunity for discretionary or policy decision-making. Such tests need re-examination in a digitised tax administration environment.

For example, two of the more straightforward indicators of whether a matter is fundamentally operational or discretionary – the volume of the activity and the level of authority of the person responsible for carrying it out – will be less reliable in a digitised environment. High-volume, repetitive mechanical tasks carried out by low-level human employees would typically be considered operational in nature (and, therefore, justiciable). In contrast, non-justiciable discretionary matters have traditionally been considered the domain of high-level officials engaged in relatively complex, low-volume policy-setting tasks. AI machines, however, defy classification as high-level employees, yet are potentially capable of rapidly dealing with large volumes of high-level discretionary decision-making tasks. This will pose new judicial challenges.

Justiciability considerations based on the compatibility or otherwise of the public and private law duties of tax officials will also need to be re-examined in a digitised environment. Tax authorities, in opposing taxpayer claims, have often argued that private sector duties and standards should not be applied to tax officials. These arguments have traditionally relied on the fact that the size and scale of public tax administration activities make it difficult, if not impossible, to carry out the sorts of checks and balances which are feasible in otherwise similar private sector settings.

Digitisation significantly erodes the persuasive force of such arguments. AI technology enables large masses of information to be processed in a fraction of the time presently taken by humans. Hence, in an AI-enhanced tax administration environment, although the scale and volume of tasks facing tax administrators remain massive, the dramatically increased capacity to deal with that scale and volume should largely negate any justification for special protection of tax authorities from exposure to suit on that basis.

Chilling effects

Digitisation should also change the current judicial approach to determining the potential chilling effects of taxpayer claims against tax authorities. In this context, ‘chilling effects’ refers to the adverse consequences of over-defensive responses by tax officials to judicial findings against them (such as avoiding giving advice to taxpayers or adopting overly lenient interpretations of tax laws). Unlike humans, AI machines have no direct awareness that there has been an adverse judicial outcome to which they would be capable of responding over-defensively. Extrapolating from this simple proposition, public policy concerns about imposing liability on tax officials due to potential chilling effects should be significantly reduced in a digital tax administration environment.

Of course, AI machines do not operate in isolation from humans. Human associates may respond over-defensively to adverse judicial determinations against AI tax machines, and these responses might manifest in ways with more far-reaching implications than in a non-digitised setting. Over-defensiveness might become entrenched via overly cautious revisions to AI algorithms, affecting a far greater volume of transactions than transient changes in the behaviours of human officials would. Further, these effects might not be appreciated for some time – perhaps only after significant revenue losses have accrued. Hence, judges may need to be especially vigilant in guarding against the potential chilling effects of imposing liability on AI tax officials.

Another major factor judges consider in determining the proper weight to be afforded to potential chilling effects is the countervailing benefit of imposing liability – namely, consequent improvements in public administration. However, such benefits will likely be dulled in a world of AI tax machines. Much has been written about the difficulties of ‘re-educating’ AI offenders, and compensation orders against machines are ineffectual. In addition, determining whether to impose liability on a human actor (such as a programmer) for loss caused by an intelligent machine may not be possible or appropriate in many cases, because artificially intelligent machines are, by definition, capable of autonomous ‘learning’ beyond simply implementing pre-programmed instructions.

The European Parliament’s Committee on Legal Affairs has recommended that, although at present responsibility must be sheeted home to a human actor, in future ‘…liability should be proportional to the actual level of instructions given to the robot and of its degree of autonomy…’. As such, assessments of whether and to what extent chill factor concerns should prevail in evaluating taxpayer claims will likely need to be revisited – perhaps multiple times – as intelligent technologies continue to evolve.

Government solvency, floodgates and indeterminacy

The transition to AI-based tax administration also challenges the future relevance and applicability of government solvency concerns. However, exactly how these concerns manifest will depend, in part, on the accuracy of future digital technologies and on the frequency and nature of any failures of those technologies.

It is generally presumed that artificial intelligence will improve accuracy. However, tax authority claims about the accuracy improvements brought about by digitisation are yet to fully materialise. In my paper, I cite numerous examples of high-profile and costly failures – including the UK Check Employment Status for Tax (‘CEST’) tool and the Australian Online Compliance Intervention Program (‘Robodebt’) scandal. In addition, the United States Taxpayer Advocate has found that as tax officials become more accustomed to relying on computer programs to make decisions, they become less capable of detecting errors in those programs.

This means tax authorities may not take adequate steps to mitigate the potential floodgates consequences of errors arising from the adoption of various AI methods and tools. They may simply not appreciate that the errors capable of triggering those consequences even exist until those errors manifest in potentially significant harm to taxpayers. In that event, tax authorities may increasingly need to rely upon floodgates policy concerns to insulate the revenue authority from attack.

Conclusion

Viewed collectively, the issues I identify in my paper confirm that digitisation of tax administration challenges the fundamental public policy underpinnings of the current tax authority legal immunity settings. Tax administrators should work with the judiciary, policymakers and other experts to identify and address these challenges before they manifest in potentially undesirable or unintended levels of exposure of tax officials or tax authorities to taxpayer claims.
