AI has the potential to have a revolutionary impact on our lives, with millions of use cases that could bring tremendous benefits to society. However, serious ethical questions and considerations are not being adequately addressed. The elephant in the room for the industry remains the exploitative working conditions and practices that are increasingly prevalent among data labelling and business process outsourcing firms, which play a key role in the rapid growth of AI and machine learning.

While AI has only recently entered the wider cultural zeitgeist, its development over the past decade has created a steady and ultimately insatiable demand for data. But it wasn't just raw data that AI's development depended on, and still depends on; it was annotated or labelled data, which requires vast amounts of human labour. This is where the data labelling industry and business process outsourcing companies have emerged as such important cogs in the AI machine: they provide the labelled datasets that machine learning models learn from.

We've seen firms such as Scale AI and Sama, to name just two, achieve huge valuations by providing labelled data quickly and cheaply. But AI's secret sauce, which in reality is not so secret, is becoming increasingly dependent on unethical and exploitative working conditions and practices.

Time has reported that OpenAI used workers in Kenya paid less than $2 an hour, while other firms use workers in the Philippines, Vietnam and Venezuela on even worse pay, at barely 90 cents an hour. They work in atrocious conditions, with intrusive CCTV systems monitoring workers' performance. A further damning report found that "a timer ticked away at the top left of the screen, without a clear deadline or apparent way to pause it to go to the bathroom." There also appears to be a great deal of unpaid work that many large companies refuse to address: training on labelling platforms, learning new tasks, fixing mistakes and providing samples for large customers often go unpaid. These dynamics are commonplace in what are now being labelled click farms.

The sector must take a stand against such practices now to avoid this race to the bottom. Society can undeniably realise huge benefits through the continued development of AI, but to achieve this on the backs of undervalued and poorly treated workers is simply wrong. And make no mistake: the current state of play means certain individuals are set to make billions of dollars off the back of these unethical practices.

At Kognic, we require all workforce partners we engage to adhere to a strong set of ethical guidelines: a higher threshold of minimum pay; better working conditions for both training and production; timelines aligned with standard business schedules and calendars (e.g. a 40-hour week with holiday time off); and other important expectations such as high-speed internet connections. With thousands of data labellers working on our customers' AI efforts through our platform, our goal has been to elevate all stakeholders in AI with fair pay, fair conditions, fair contracts, fair representation and fair management.

The rapid rise of AI in recent months has led many to raise ethical questions and concerns about its advancement, focusing on its potential to pose significant risks to humanity and society, from threatening people's jobs and livelihoods to spreading misinformation. However, the ethical concerns around data labelling haven't received adequate attention. The issue goes right to the heart of whether we use AI as a tool to improve society, just as much as misinformation or threats to jobs do.

If we are to create a future where AI contributes positively to society, we have to adopt a human-first approach. This technology must be designed, developed and implemented with humans always at the forefront, yet currently we're falling at the first hurdle. How can we trust AI to be used for the betterment of the human experience if its development relies on the exploitation of the very people it's supposed to benefit?

Industry leaders are sorely mistaken if they believe they have fulfilled their responsibility to act ethically in the development of AI. Claims that external market factors make it impossible to adhere to basic industry standards are untrue. As a collective and as an industry, we have the power to refuse to work with click farms and to actively seek out business process outsourcing companies that don't use exploitative practices.

Those who say the future of AI is bright are correct, but only if we take a stand against those who see cutting corners as the only way up. If not, the future of AI is set for a race to the bottom.