Since the public release of ChatGPT in November 2022, AI has quickly become a topic on many businesses’ and legislators’ minds.

Discussions over AI regulation have largely concerned potential “doomsday scenarios” and the regulatory stifling of technological innovation. ChatGPT creator OpenAI even offered a $100,000 grant for ideas on how to shape AI regulation.

If GlobalData estimates are correct, the total global AI market is set to be worth over $984bn by 2030, and the technology has already disrupted many sectors, from healthcare to finance.

Whilst research published this August by the UN has confirmed that AI will disrupt employment, its International Labour Organisation also raised concerns about the “microtask” work necessary to run AI systems.

“Microtask” work consists of tagging data or providing feedback on AI-generated answers to ensure the quality of such systems.

Whilst the report itself states that there is no official figure for the number of microtask workers, the UN estimates that this could be a global workforce of nine million people.

As the report points out, many microtask workers are hired through crowdsourcing websites or subcontractors. Those hired through crowdsourcing websites are often paid by the number of tasks completed and frequently do not benefit from the “labour protections or social security benefits that come with the employment relationship.”

Speaking last week (14 September) to a rater for Google Bard, Verdict heard first-hand about the strict working conditions these workers face.

A member of the Alphabet Workers Union, Ed Stackhouse, stated that Google’s raters are “explicitly excluded” from the company’s $15 minimum wage and are often threatened with punishment if they do not complete a sufficient number of daily tasks.

According to Stackhouse, the “AI underclass already exists” behind many of the popular chatbots breaking the internet.

Despite predictions that AI will soon be a $984bn market, Stackhouse alleges that it is common for raters to be underpaid without explanation and that the workforce is often “isolated from one another” by Google.

Stackhouse emphasised that the quality of AI tools is “entirely dependent on the quality of working conditions shaping these technologies.”  

“If Google can treat us without standards,” Stackhouse explained, “then we are driving towards a future where any and all workers are denied dignified wages and working conditions.” 

US representatives have already written a letter to Big Tech regarding these marginalised workers.

As of writing, none of the nine addressees (including OpenAI, Microsoft and Meta) have responded to the letter. 

Verdict spoke to Ron Moscona, a partner at law firm Dorsey & Whitney, about the practicalities of creating AI regulations to protect workers like Stackhouse.

“In most cases,” Moscona begins, “the law is unlikely to recognise the person who develops or trains an AI tool as the author of works that the tool will generate.

“Unlike traditional software development, the work to train an AI does not involve writing code, therefore, the engineers or scientists responsible for training the AI system may not be considered authors of the AI tool,” Moscona explains. 

Looking forward, Moscona believes that changes to intellectual property laws may be “appropriate” to extend some form of copyright-like protection to AI software developed through training and machine learning.

However, Moscona did warn that providers of current AI are likely to “keep the code under their control and not to make it available to users, thus preventing unauthorised reproduction and use through physical rather than legal controls.”

In practice, however, Moscona notes that companies usually ensure that they alone own the copyright in any software, leaving individual developers and raters “rarely recognised.”

Speaking on AI regulation from a tech perspective, Annee Bayeux, chief learning strategist at Degreed, emphasised the need for a “people-centric” approach to regulating AI.

“Doomsday situations and everyday rights,” she explained, “are not mutually exclusive.” 

“We imagine the worst-case scenario and write legislation based on this because we are trying our best to protect workers’ rights,” Bayeux continued.

Bayeux pointed out that emerging technologies have always carried risks of eroding workers’ rights and displacing employment, but AI has created these problems on a previously unthinkable scale.

“AI regulation can protect us from the improper use of the technology, for all aspects of society: as a worker, a consumer, and a citizen,” Bayeux concluded.