Rising Voices (RV) is partnering with the Association for Progressive Communications (APC), which produced the 2019 edition of Global Information Society Watch (GISWatch) focusing on Artificial Intelligence (AI): Human Rights, Social Justice and Development. Over the next several months, RV will republish versions of the country reports, especially those highlighting how AI may affect historically underrepresented or marginalized communities.
This post was written by Hannah Thinyane and Monticha Puthawong of the United Nations University Institute on Computing and Society (UNU-CS), Macau and Thailand. This report was originally published as part of a larger compilation: “Global Information Society Watch 2019: Artificial intelligence: Human rights, social justice and development”. Please visit the GISWatch website for the full report, which is also available under a CC BY 4.0 license.
Forced labour and human trafficking affect more than 24.9 million men, women and children globally, who are exploited for their labour or forced into prostitution. Figures released by the US State Department indicate that only 85,613 victims were identified worldwide in 2018. Together, these figures illustrate that a large number of people, often migrant workers, are exploited in slavery-like conditions, yet only a small fraction are successfully identified and subsequently helped.
The terms “human trafficking” and “forced labour” are often used interchangeably by popular media and practitioners, so it is worth defining each here. In our work, we draw on Skřivánková’s continuum of exploitation, which defines “decent work” and “forced labour” as the two ends of a continuum, with any situation between these end points representing a different form of labour exploitation. These work situations can range from “cooperative, consensual, mutually beneficial relationships between migrants and their facilitators” to “highly coercive and exploitative”. Using this continuum, we can see human trafficking as a process, consisting of a series of exploitative acts that move a worker towards a situation of forced labour. In this report, we use the term “frontline responder” (FLR) to refer collectively to the broad range of stakeholders whose role it is to assess working conditions and to help potential victims access help or remediation channels – including police, labour inspectors, auditors and non-governmental organisations (NGOs).
In this report we draw on the findings of a two-and-a-half-year project aimed at understanding how digital technology can be used to support exploited workers in vulnerable situations. The report starts by describing the development process we have undertaken in Thailand to create Apprise, a system to support the proactive and consistent screening of workers in vulnerable situations. It then considers the potential of artificial intelligence (AI) to support an understanding of changing practices of exploitation.
Apprise
Our work takes a value sensitive design (VSD) approach, which is based on the understanding that technology is shaped by the biases and assumptions of its designers and creators. VSD proactively integrates ethical reflection into the design of solutions, using an integrative and iterative tripartite methodology comprising conceptual, empirical and technical investigations. With its focus on values, this self-reflexive approach seeks to be “proactive [in order] to influence the design of technology early in and through the design process.” In doing so, VSD shows a commitment to progress, not perfection.
We began our work in Thailand early in 2017 with a series of focus groups with a broad range of stakeholders, including survivors of exploitation, NGOs, Thai government officials and intergovernmental organisations. These focus groups aimed to understand current practices and problems in identifying victims of human trafficking; participants’ access to and use of technology; and their perceptions of the ways technology could support them in overcoming the problems they face. In summary, this initial consultation suggested that support was most needed during the initial screening phase of victim identification. The core problems identified at this stage were:
- Communication: Due to a lack of resources (and of knowledge of the languages required), FLRs commonly could not speak the same language as workers and were therefore unable to interview them. Translators were also not always available.
- Privacy: Initial screening occurs in the field and sometimes in front of potential exploiters. Workers fear retribution if they answer questions honestly.
- Training: There is a lack of understanding of the common indicators of labour exploitation and forced labour, with some FLRs focusing on physical indications of abuse, rather than the more subtle forms of coercion such as debt bondage and the withholding of wages and important documents.
Based on these findings, we developed Apprise, a mobile-based expert system to support FLRs in proactively and consistently screening vulnerable populations for indications of labour exploitation. The tool is installed on the FLR’s phone, but it ultimately serves to allow workers to disclose their working conditions privately. Questions are translated and recorded in languages that are common among workers in each sector, and when combined with a set of headphones, this gives workers a private way to answer while in the field. By analysing their responses to a series of yes/no questions, Apprise provides advice on the next steps the FLR should take to support the worker. Responses to questions are stored on the FLR’s phone and uploaded to a server the next time they log in with network reception, to support post-hoc analysis. As well as shaping the co-design of the system itself, our consultations with participants surfaced current indications of exploitation, which informed the lists of questions asked. From April 2017 to June 2019, over 1,000 stakeholders in the anti-trafficking field in Thailand contributed to the design or evaluation of the system.
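To make this flow concrete, the sketch below shows, in simplified Python, how a screening of this kind can be structured: pre-recorded questions are presented one at a time, yes/no answers are collected privately, a simple rule suggests next steps, and responses are queued locally for later upload. The question wording, file names, scoring rule and threshold here are illustrative assumptions, not the actual Apprise question lists or decision logic.

```python
from dataclasses import dataclass, field
import json
import time


@dataclass
class Question:
    qid: str            # stable identifier for the question
    text: str           # English reference wording
    audio_file: str     # pre-recorded translation played over headphones
    risk_if_yes: bool   # does a "yes" answer indicate possible exploitation?


@dataclass
class ScreeningSession:
    sector: str
    language: str
    answers: dict = field(default_factory=dict)  # qid -> True / False / None
    started_at: float = field(default_factory=time.time)


# Illustrative questions only; real lists are sector-specific and co-designed.
FISHING_QUESTIONS = [
    Question("docs_withheld", "Is someone else holding your identity documents?",
             "docs_withheld.mp3", True),
    Question("debt_to_broker", "Do you owe a debt to a broker or employer?",
             "debt_to_broker.mp3", True),
    Question("free_to_leave", "Are you free to leave this job if you want to?",
             "free_to_leave.mp3", False),
]


def run_screening(questions, get_answer, sector="fishing", language="my"):
    """Present each translated question and record a private yes/no answer."""
    session = ScreeningSession(sector=sector, language=language)
    for q in questions:
        # In the app the audio file is played through headphones; here a
        # caller-supplied function stands in for the worker's tap on the phone.
        session.answers[q.qid] = get_answer(q)
    return session


def advise(session, questions, threshold=2):
    """Simple rule-based advice: count answers that point towards exploitation."""
    flags = sum(1 for q in questions
                if session.answers.get(q.qid) is not None
                and session.answers[q.qid] == q.risk_if_yes)
    if flags >= threshold:
        return "Refer for an in-depth interview / victim identification process"
    if flags == 1:
        return "Follow up with targeted questions on the flagged indicator"
    return "No indicators detected in this screening"


def queue_for_upload(session, outbox_path="outbox.jsonl"):
    """Store anonymised responses locally until the FLR next has network access."""
    record = {"sector": session.sector, "language": session.language,
              "answers": session.answers, "started_at": session.started_at}
    with open(outbox_path, "a") as f:
        f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    # Simulated answers standing in for a worker's responses.
    simulated = {"docs_withheld": True, "debt_to_broker": True, "free_to_leave": True}
    session = run_screening(FISHING_QUESTIONS, lambda q: simulated[q.qid])
    print(advise(session, FISHING_QUESTIONS))
    queue_for_upload(session)
```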
Since March 2018, NGOs have been using Apprise in the field to support proactive and consistent screening in their outreach activities in the following sectors: fishing, seafood processing, manufacturing and sexual exploitation. In May 2018 we started to work closely with the Ministry of Labour (specifically the Department of Labour Protection and Welfare) and the Royal Thai Navy in Thailand to understand how Apprise could support proactive and consistent worker screening at government inspection centres at ports (Port-in/Port-out or “PIPO” inspection centres) and at sea.
Through this process of working on the ground with FLRs, we have noticed that exploiters continually tweak and refine their practices of exploitation in response to changing policies and practices of inspection. When exploiters change their practices, it takes time for these changes to be recognised as a new “pattern” of exploitation. Information is often siloed by different stakeholders and not shared, for a wide variety of reasons. After some time, stakeholders do begin sharing these changing patterns, often through their informal networks. At this point, the new practice is identified as a pattern and a new policy or practice of inspection is developed.
This game of cat and mouse continues over time, with exploiters again tweaking their behaviour to avoid detection. In response, we developed Apprise to allow new questions to be added to question lists, and new languages to be supported, on the fly. When an FLR logs in on their phone, Apprise checks for any updates to the lists and downloads new audio translations of questions. This adaptive support allows FLRs to ask questions about current patterns of exploitation, obtaining further information on exploitative practices once a new pattern has been identified.
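The sketch below illustrates one way such an on-login update could work: the app compares a locally cached version number against a server manifest and downloads any question lists and audio translations it does not yet hold, so new questions and languages become available offline. The endpoint, manifest format and cache layout are hypothetical placeholders, not the actual Apprise synchronisation mechanism.

```python
import json
import os
import urllib.request

# Hypothetical server and local cache layout, for illustration only.
BASE_URL = "https://example.org/apprise"
CACHE_DIR = "question_cache"


def local_version(sector):
    """Return the version of the question list currently cached for a sector."""
    path = os.path.join(CACHE_DIR, sector, "version.json")
    if not os.path.exists(path):
        return 0
    with open(path) as f:
        return json.load(f)["version"]


def sync_question_list(sector):
    """On login, fetch the server manifest for this sector and, if it is newer
    than the local cache, download the updated list and any missing audio."""
    with urllib.request.urlopen(f"{BASE_URL}/{sector}/manifest.json") as resp:
        manifest = json.load(resp)

    if manifest["version"] <= local_version(sector):
        return  # already up to date

    sector_dir = os.path.join(CACHE_DIR, sector)
    os.makedirs(sector_dir, exist_ok=True)

    # Download only the audio translations that are not yet cached locally, so
    # newly added questions (or languages) can be played offline in the field.
    for audio in manifest["audio_files"]:
        target = os.path.join(sector_dir, audio)
        if not os.path.exists(target):
            urllib.request.urlretrieve(f"{BASE_URL}/{sector}/{audio}", target)

    with open(os.path.join(sector_dir, "version.json"), "w") as f:
        json.dump({"version": manifest["version"],
                   "questions": manifest["questions"]}, f)
```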
Based on this observation, we began to ask ourselves whether there was a role for machine learning in supporting more timely and more accurate identification of these changing practices of exploitation. While this work is still in its nascent stages, our aim is to determine sector-specific practices of exploitation in order to create targeted education and awareness-raising campaigns; support FLRs in proactively screening against current practices of exploitation; and inform evidence-based policy to support the prosecution of exploiters.
Machine learning to detect patterns of exploitation
At its broadest, machine learning works by identifying patterns in existing data. Its main goal is to generalise, so that patterns identified in training data can be accurately applied to unseen data. Machine learning has been applied in a wide range of criminal justice contexts, including predicting crimes, predicting offenders, predicting perpetrator identities and predicting crime victims. It has also been used in the anti-trafficking field for predictive vulnerability assessments and crime mapping, in order to improve government resource allocation. In our work, we aim to understand whether there is a role for machine learning in predicting changing patterns of exploitation, an area that has so far received little attention.
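As a simplified illustration of what “identifying patterns” could look like in this setting, the sketch below encodes yes/no screening responses as binary vectors and applies an off-the-shelf anomaly detector, flagging response combinations that do not resemble previously reviewed screenings so that an analyst could examine them as candidate new practices. The question identifiers and data are synthetic, and this is not the project’s actual modelling pipeline.

```python
# Minimal sketch: flag unusual combinations of screening answers for review.
import numpy as np
from sklearn.ensemble import IsolationForest

QUESTION_IDS = ["docs_withheld", "debt_to_broker", "wages_withheld",
                "free_to_leave", "excessive_hours"]


def encode(responses):
    """Map a dict of yes/no answers onto a fixed-length 0/1 vector."""
    return [1 if responses.get(q) else 0 for q in QUESTION_IDS]


# Synthetic historical data standing in for previously reviewed screenings.
rng = np.random.default_rng(0)
historical = rng.integers(0, 2, size=(500, len(QUESTION_IDS)))

model = IsolationForest(contamination=0.05, random_state=0)
model.fit(historical)

# A new batch of screenings: a prediction of -1 marks a response combination
# the model considers unusual relative to historical patterns, i.e. a
# candidate for analyst review as a possible new practice of exploitation.
new_batch = [
    {"docs_withheld": True, "debt_to_broker": True, "wages_withheld": True},
    {"free_to_leave": True},
]
labels = model.predict([encode(r) for r in new_batch])
for responses, label in zip(new_batch, labels):
    verdict = "review as possible new pattern" if label == -1 else "matches known patterns"
    print(responses, "->", verdict)
```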
While there are obvious benefits that accurate forecasting tools could bring, governments, civil society and academics have not always spoken so favourably about these tools, citing cases where they “can reproduce existing patterns of discrimination, inherit the prejudice of prior decision makers or simply reflect the widespread biases that persist in society. [They] can even have the perverse result of exacerbating existing inequalities by suggesting that historically disadvantaged groups actually deserve less favourable treatment.”
While recognising different notions of human rights (moral, ethical and philosophical), our work takes a legal approach, based on the Universal Declaration of Human Rights (UDHR), the United Nations Guiding Principles on Business and Human Rights and the International Labour Organization (ILO) Declaration on Fundamental Principles and Rights at Work. These international legal instruments provide an established framework for “considering, evaluating and ultimately redressing the impacts of artificial intelligence on individuals and society.”
In analysing the human rights impact of using machine learning to identify changing practices of exploitation, an important first consideration is the quality of the data provided in initial screening interviews using Apprise, an issue closely linked to privacy.
Significant attention was paid in the design phase of Apprise to placing strict limits on how much data is collected from individual workers, and on who can access screening responses (and with what level of access). For example, Apprise aims to support accountability and transparency by automatically sharing a summarised version of screening responses with the FLR’s immediate supervisor. To protect workers, this summary deliberately limits the accuracy of the GPS locations of screening sessions and shares only the responses to the yes/no questions.
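The sketch below illustrates this data-minimisation step: the record shared with a supervisor carries only the yes/no answers and a deliberately coarsened location. The rounding granularity and record fields are assumptions made for illustration, not the Apprise specification.

```python
def coarsen_location(lat, lon, decimals=1):
    """Round coordinates so a screening can only be placed within roughly
    10 km, rather than at an exact pier or vessel."""
    return (round(lat, decimals), round(lon, decimals))


def supervisor_summary(session_answers, lat, lon):
    """Build the minimal record shared with a supervisor for accountability."""
    return {
        "answers": dict(session_answers),             # yes/no responses only
        "approx_location": coarsen_location(lat, lon),
        # Deliberately omitted: worker identifiers, audio, free text, exact GPS.
    }


# Example: a summary for a screening near Bangkok (illustrative coordinates).
print(supervisor_summary({"docs_withheld": True, "free_to_leave": False},
                         13.7563, 100.5018))
```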
To support workers’ privacy, we do not collect any personally identifiable information, as we believe the risks associated with doing so would unfairly disadvantage those who choose to answer questions. A consequence, however, is that there is no way to delete a particular individual’s responses at a later date, should they wish to request this.
Over the past year and a half, we have evaluated and refined Apprise based on feedback from workers in vulnerable sectors as well as survivors of trafficking, with the aim of increasing the privacy that workers feel in these initial screening sessions. We note that while no screening system can guarantee truthful responses from workers, Apprise provides more privacy than current methods of interviewing workers, which often take place in groups and in front of potential exploiters (and, in the worst cases, use supervisors as translators when language barriers arise).
Within a machine learning system, interview responses would necessarily be shared more widely, which requires special consideration. The new patterns of exploitation themselves are intended to be shared with other FLRs, to inform the initial screening of workers in vulnerable situations. However, care must be taken over who else has access to them: as soon as exploiters realise that their patterns of exploitation have been identified, they are likely to adapt them more quickly.
In cases where responses are accurate and the tool is able to identify new practices of exploitation, there are obvious implications for the rights of exploited workers: the right to freedom from slavery (UDHR Article 4); the right to freedom from torture and degrading treatment (UDHR Article 5); the right to desirable work (UDHR Article 23); the right to rest and leisure (UDHR Article 24); the right to an adequate standard of living (UDHR Article 25); and freedom from state or personal interference in the above rights (UDHR Article 30). Importantly, while the system takes input from only a subset of workers (those who have been interviewed), it has the potential to affect the working conditions of many more.
Like any system, Apprise may misidentify patterns, directing attention in the wrong direction. While this represents an inefficient use of resources (FLR and worker time), it does not have significant implications for the rights of workers: the system’s output would be used to inform investigations, which would themselves disprove the prediction.
Conclusion
Machine learning has been applied in a wide range of criminal justice contexts. In our work, we aim to understand whether there is a role for machine learning in predicting changing patterns of exploitation, an area that has so far received little attention.
In this report we describe work that we are undertaking to proactively and consistently screen workers in vulnerable situations for signs of labour exploitation and forced labour. The report introduces Apprise, an expert system that we have developed and that FLRs are currently using in Thailand to support the initial screening stage of victim identification. It also discusses the potential use of machine learning to draw on responses to these screening interviews and predict changing patterns of exploitation. We reflect on this proposed system in order to understand the human rights implications that such a technology would entail. While there is an obvious implication for workers’ right to privacy, we describe the steps taken to minimise this imposition. We also advocate the use of the system to support the fundamental human rights of workers who are currently trapped in exploitative work situations.