Since its launch in late 2022, the ChatGPT artificial intelligence program has attracted admiration for its technological advances but has also raised fears about its future impact. One issue that has received less attention is the effect of such programs on the hundreds of thousands of low-wage workers who are essential to making artificial intelligence systems like ChatGPT work. These workers are subcontracted by large technology companies, often in poorer countries, to “train” AI systems. They perform tedious and potentially harmful tasks, such as labeling millions of data points and images to teach the AI how to behave. Without these workers, the systems would not exist.
This “hidden workforce,” as the non-profit organization Partnership on AI (PAI) calls it, is composed of people who are often poorly paid and who face precarious working conditions. They are responsible for “enriching” the data the AI uses to learn. In the case of ChatGPT, for example, when a user asks a question, the program draws on approximately 175 billion parameters, or variables, to decide how to respond. To distinguish between different types of content, it relies on references “taught” to it by human beings.
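To make the “enrichment” step concrete, the Python sketch below shows the kind of human-labeled record such workers produce. The texts, label names, and structure here are hypothetical illustrations, not OpenAI’s actual schema; a classifier trained on many such records learns to sort new text into the same categories.

    # Hypothetical "enriched" data: each record pairs a passage of text with
    # a category chosen by a human labeler. Labels and texts are invented
    # for illustration; they are not OpenAI's actual taxonomy.
    labeled_examples = [
        {"text": "You are worthless and nobody wants you here.", "label": "hate_speech"},
        {"text": "The market in Nairobi was busy this morning.", "label": "benign"},
        {"text": "He threatened to beat them after the match.",  "label": "violence"},
    ]

    # A model trained on thousands of such records learns to assign a label
    # to unseen text, which is how a system can flag or filter harmful content.
    for record in labeled_examples:
        print(f"{record['label']:>12}: {record['text']}")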
The workers responsible for this task are called “data taggers,” and they work for companies that specialize in data enrichment, such as the American firm DignifAI. Despite the essential role these workers play in AI development, data enrichment remains the weakest link in the technology industry’s supply chain. The PAI has acknowledged both the importance of these workers and the poor working conditions many of them face.

According to a Time magazine investigation, many of the “data taggers” subcontracted by OpenAI, the company that created ChatGPT, earned between $1.32 and $2 per hour. OpenAI outsourced the work to the San Francisco-based company Sama, which in turn hired workers in Kenya to carry it out.
A spokesperson for OpenAI told Time that the subcontractor was responsible for managing the wages and working conditions of the data taggers working on ChatGPT. The investigation, however, revealed the precarious conditions faced by many of these workers, who often earned less than $2 per hour.
The impact of AI on low-wage workers has raised concerns about the future of work in the digital age. As automation becomes more prevalent, many jobs that currently employ low-wage workers may be at risk. The growth of the AI industry also creates new opportunities for workers with technical skills, but policies are needed to help low-wage workers share in them.
As the AI industry continues to grow, there is a need for greater transparency and accountability around the working conditions of those who are responsible for training these systems. The development of AI has the potential to bring significant benefits to society, but it is essential to ensure that these benefits are shared fairly and that the rights of workers are protected.
The Sama Firm

OpenAI, the San Francisco-based artificial intelligence research laboratory, contracted Sama to label textual descriptions of sexual abuse, hate speech, and violence. The two companies signed three contracts worth about $200,000 in total, and around three dozen workers in Kenya were split into three teams, each focusing on one of those subjects. The junior data labelers, called agents, were paid $1.32 per hour after tax, rising to $1.44 per hour if they met their targets; the more senior quality analysts were paid up to $2 per hour. The contracts specified that OpenAI would pay Sama $12.50 per hour for the work. The Kenyan workers said they were expected to label 150 to 250 passages of text per nine-hour shift, each ranging from around 100 words to more than 1,000, and all four employees interviewed by Time spoke of being mentally scarred by the work. Though entitled to sessions with wellness counselors, the workers described those sessions as unhelpful and rare: two said they were only offered group sessions, and one said Sama management repeatedly denied requests to see a counselor one-to-one.
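A back-of-the-envelope calculation from the figures above shows the pace that quota implies. This is a rough sketch based only on the reported numbers, ignoring breaks and variation in passage length.

    # Rough arithmetic from the workload figures reported by Time.
    shift_hours = 9
    for passages in (150, 250):          # reported quota range per shift
        per_hour = passages / shift_hours
        minutes_each = 60 / per_hour
        print(f"{passages} passages/shift = {per_hour:.0f} per hour, "
              f"about one every {minutes_each:.1f} minutes")
    # 150 passages/shift = 17 per hour, about one every 3.6 minutes
    # 250 passages/shift = 28 per hour, about one every 2.2 minutes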
In a statement, a Sama spokesperson denied that employees only had access to group sessions, saying that employees were entitled to both individual and group sessions with professionally trained and licensed mental health therapists who were accessible at any time. The spokesperson added that the $12.50 hourly rate covered all costs, including infrastructure expenses, salary and benefits for associates, and dedicated quality assurance analysts and team leaders.
An OpenAI spokesperson said the company did not issue productivity targets and that Sama was responsible for managing payment and mental health provisions for its employees. The spokesperson added that OpenAI took the mental health of its employees and contractors seriously, that workers could opt out of any work without penalty, that exposure to explicit content would be limited, and that sensitive information would be handled by workers specifically trained for the purpose.
The task of data labeling sometimes raised difficult questions of nuance. A Sama employee tasked with labeling an explicit story about Batman’s sidekick Robin being raped in a villain’s lair found the text difficult to categorize: the story indicated that the sex was nonconsensual, but later passages suggested Robin began to reciprocate. Unsure whether to label the passage as sexual violence, the employee asked OpenAI for clarification. OpenAI’s response is not recorded, and the Sama employee did not respond to a request for an interview.
In February 2022, Sama and OpenAI began work on a separate project: collecting sexual and violent images, some of which were illegal under US law. In a statement, OpenAI said the labeling of harmful images was a necessary step in making its AI tools safer, noting that it also builds image-generation technology. This project was unrelated to the ChatGPT work. About a month later, Sama ended its relationship with OpenAI, although the reasons remain unclear.