
Amid debate about increasing job search compliance requirements and the threat of sanctioning those considered ‘job snobs’, it is time to open a dialogue that bridges silos in research on the effectiveness of payment sanctions in employment services.

It is no secret that different research disciplines have different ways of assessing whether some kinds of public policy intervention work better than others. One example is the competing forms of knowledge about the effectiveness of sanctioning job seekers for not meeting requirements in their job plans.

Empirical researchers may apply econometric methods with their own evaluative criteria. One example is two recently published studies by Andrew Wright and co-authors that used ‘a propensity score matching approach’ on administrative data and a natural experiment (a blog post about the study is available here). Wright and co-authors applied statistical validation techniques to a large data set to assess whether increasing a financial penalty from one to two days’ payment made a follow-up penalty for another non-compliance event more or less likely. They also tested whether payment suspensions increased the likelihood of compliance. They found that where a job seeker had experienced a payment penalty, the likelihood of attending subsequent appointments improved.

The authors of this blog apply two different approaches. As a critical researcher, Simone Casey has used evaluative criteria focused on power relations and the subjective effects of sanctions. David O’Halloran’s research approaches the issue from a psycho-social perspective, examining why unemployed workers miss employment services appointments and what it feels like to miss out on good services. Our studies question the effectiveness of these policies in terms of their well-being effects and their ethics, rather than objectives such as increasing attendance rates at appointments.

Where the conclusions of these studies, with their different aims and methodologies, diverge is over what we are trying to measure, what we should be trying to measure, and ultimately what is regarded as valid evidence from a policy-making point of view.

Seeing the forest and the trees

We argue that Wright and co-authors’ reliance on quantitative data, while valid, needs to be tempered with qualitative data and analysis of the subjective experiences of programs to ensure we see the “forest and the trees”.

The correlation between a sanction and an improvement in appointment attendance does not tell us whether there is any change in psychological commitment to the target behaviour. It simply shows that people are acting to avoid a further sanction. Any other interpretation of the results relies on an assumed or inferred link between the two phenomena.

The policy of sanctions, as part of the broader policy of so-called Mutual Obligation, is supposed to lead to better public policy outcomes such as helping people get jobs or improving their employability. This is not demonstrated by the results of the Wright and co-authors studies, which acknowledge that the evaluation was about whether sanctions get people to comply with turning up to an appointment, rather than about actually getting a job or becoming more employable.

Our research shows that while the policy makers and policy subjects might be on the same page when it comes to the intention of mutual obligation, they disagree that an appointment with an employment service provider is an effective way of getting a job. The policy subjects – those looking for jobs – find the experience of an appointment counter-productive and sometimes harmful.

The policy response of increasing sanctions for non-attendance may have increased attendance, but it may have missed the point of why so many people were not attending in the first place. Conflating assumptions about attitudes to work with behaviours that are a response to poor-quality services will potentially result in policy failure. Our current research, for example, is showing there are six indicators of quality that are important to service users, and which are quite different from those currently used to assess employment services. These indicators consider whether the service is: useful and competent; client-centred; responsive to feedback; fair; trustworthy; and friendly.

Wright and co-authors argue that sanctions such as payment suspensions do not cause harm, but they provide no evidence for this from the perspective of anyone experiencing such sanctions. In doing so, they overlook a body of welfare conditionality literature that has provided clear and insightful analysis of the potential for harm and social alienation arising from the use of sanctions and payment suspensions.

The authors of a book about welfare conditionality note that “there can be no single answer to the question ‘does conditionality work?’, either in theory (as so much depends on one’s objectives, context and the specific mechanisms used) or in practice (given the highly variable findings of available empirical evidence)”. In the context of research into behavioural change, the authors concluded that there were serious reasons to challenge the rational actor premise of behavioural economics and to understand rationality as potentially ‘thin’ or ‘socially embedded’, particularly as exercised by marginalised or disadvantaged groups.

Wright and co-authors state that sanctions serve a political purpose, to buttress public support for unemployment payments. This fits with the dominant discourse surrounding the unemployed in Australia, which has increasingly moved to a view that they are lazy, don’t do enough to find jobs, and should be under more obligation to find work. This has in turn been embedded into the design of ‘work first’ unemployment policy. This ignores evidence that unemployed people in Australia say they want to work. It also ignores the fact that being unemployed and receiving welfare payments are two separate things. Australia’s unemployment policy is almost entirely focussed on the payment system, with little active policy in supporting education, training or creating jobs.

Bridging silos in public policy research

We argue that there is potential for future collaboration to produce research evidence that examines the full range of effects. Participatory policy development and interpretive paradigms have long-established credibility in public policy. These approaches are used to understand the subjective impact of policy on human subjects. They are driven by humanist and emancipatory goals rather than economic or cost-benefit frameworks, and aim to bring forms of social suffering to the attention of those who care. There is a need for more critical insight into policy making, and into the frameworks used to evaluate program effectiveness, drawing on other forms of evidence.

This suggests the need for stronger linkages between critical and social researchers and policy makers so that new formulas for the investment of public resources can be trialled in conditions where the evaluation includes qualitative measures of well-being, self-efficacy, motivation and social capital.

It is important that all these questions receive attention as we progress scholarly research into welfare conditionality tools like the JobSeeker compliance system. They are particularly relevant to evaluation of the Targeted Compliance Framework’s digital interfaces and issues that may emerge from the imminent rollout of digital employment services.

We hope to open a dialogue on public policy inquiry between critical, psycho-social, and empirical researchers in future.

This article has 1 comment

  1. For clarity, we (Wright and co-authors) did not claim that sanctions do not result in any harm. On the contrary, we explicitly cited evidence that inappropriately targeted sanction policies can lead to significant adverse consequences. We did claim that payment suspensions result in no lasting financial impact, and no financial impact at all in 86% of cases. But we provide evidence for that, and explain why it is the case. We then analyse the effects of these ‘zero dollar’ sanctions.

    The authors of this blog post may also be aware that there is a lot of existing qualitative research in the area (for example four recent parliamentary inquiries, and significant content from welfare advocates). However, there is a relative lack of quantitative evidence on the effects of Australian sanction policies.
