Arab Canada News
Published: August 17, 2023
Mofat Okine, 27, says that his work in artificial intelligence left him deeply shaken.
Kenyan workers have described the trauma and psychological damage they suffered while working to keep AI technology safe.
Artificial intelligence (AI) is sweeping across the globe, and to make powerful chatbots like ChatGPT safe, people are employed to teach AI systems to recognize extremist content so that it is never shown to users.
It is a rapidly growing industry worth billions of dollars that employs thousands of people in low-income areas such as Africa, India, and the Philippines, in addition to countries at the forefront of technological innovations such as China and the United States.
But the process of creating these safety filters can negatively affect those who view violent materials and images.
Mofat, a Kenyan, worked as a data moderator, tasked with reviewing and reporting extremist content.
The 27-year-old said that the trauma he experienced changed him and cost him his marriage and friendships, leaving him suffering from depression. Other colleagues of his said they were experiencing post-traumatic stress disorder.
Mofat worked for a company called "Sama," which was contracted by the firm behind the ChatGPT technology.
He and his colleagues were tasked with sorting extremist material so that the chatbot could not absorb it and present it to users. Mofat's role was to review all flagged material and confirm that it had been labeled accurately.
Mofat has joined other data classifiers in petitioning the Kenyan Parliament to investigate employment conditions at Kenyan tech companies that large foreign tech firms use for content moderation and other AI-related work.
He says they did not receive adequate training to deal with the extremist written content they encountered and were not provided with sufficient professional counseling support - allegations that Sama strongly denies.
Psychologist and PTSD specialist Dr. Veronica Njitcho explained that the effect of exposure to extremist content online is known as secondary trauma, and that it can have prolonged effects similar to the primary trauma experienced by victims of abuse.
She said: "What makes secondary trauma so damaging is the feeling of helplessness that comes with it."
"You are viewing the content, but there is no next step: no one to report it to, and nothing you can do about it."
"And because this is content moderation, you can't predict what you will see, or how extreme it might become."
Njitcho said that symptoms of secondary trauma can include nightmares, withdrawal from other people, loss of empathy, and anxiety or stress when seeing anything that recalls what the person read or watched.
Mofat's former employer stated that psychological and social support was provided throughout their employment.
Dr. Njitcho said it is essential for companies to provide the necessary support "to ensure that it does not impact their performance and wellbeing."
In May, more than 150 African content moderators providing services for AI tools used by several major tech companies voted to establish the first union for content moderators.
This type of recognition and understanding is what Mofat wants from big tech companies and billions of users.
"People should know that the individuals who made their platform safe are content moderators. People don't even know that this group exists."
Sama denied all the complaints, stating that every job applicant underwent a "mental resilience" test and was shown examples of the content they would be dealing with. Before the project began, employees were also required to review and sign a consent form, which the company says made clear that the work could involve violent images.
A company spokesperson said: "For those successful colleagues selected for the project, psychological and social support was provided throughout their employment."
"Sama employs professional health specialists who provide psychological counseling services around the clock, either onsite or through the company’s health insurance plan."
For now, Mofat is waiting to find out whether the petition will be heard in Parliament. While working on his mental health, he takes comfort in his job as a data classifier, which he believes will help him recover.
He says: "I am very proud. I feel like a soldier. Now using ChatGPT is safe for everyone because we are the ones taking the bullets on their behalf."