July 12, 2023, New Jersey: Google’s AI chatbot, Bard, relies on thousands of outside contractors, including staff from companies like Appen and Accenture, to ensure the accuracy and quality of its responses. These contractors, who often work for low wages and receive minimal training, face intense pressure and tight deadlines as they review and improve the chatbot’s answers. They assess topics ranging from medication dosages to state laws, often without specific expertise in those areas.
The contractors have raised concerns about their working conditions, which they believe negatively impact the quality of the AI’s responses. They fear that the fast-paced and stressful environment fosters a culture of fear rather than teamwork. Some contractors have even expressed worries that the speed required to review content could make Bard a faulty and dangerous product.
Google has prioritized AI and positioned its products as valuable resources in various fields, including health and education. However, the contractors argue that their working conditions hinder their ability to deliver high-quality results. They often encounter convoluted instructions and face short deadlines for auditing answers. The guidelines they follow when assessing the AI’s responses can be subjective and lack rigorous fact-checking.
Although Google claims to prioritize accuracy and reduce bias, the contractors believe their workload has increased as the company races to compete with OpenAI. Workers report assessing high-stakes topics, such as medication dosages, without adequate expertise or support. They fear that even minor inaccuracies can undermine the chatbot’s trustworthiness and amplify misinformation.
The contractors responsible for improving Google’s AI products also face challenges around job security and communication. They are often unaware of where the AI-generated responses originate or where their feedback goes. Moreover, the precarious nature of their employment, low wages, and lack of direct communication with Google contribute to concerns about labor exploitation.
The contractors argue that the vast scope of topics covered by AI chatbots like Bard is unrealistic and raises ethical questions. Emily Bender, a professor of computational linguistics at the University of Washington, said the work of these contract staffers at Google and other technology platforms is “a labor exploitation story,” pointing to their precarious job security and how some of these kinds of workers are paid well below a living wage. “Why should the same machine that can give you the weather forecast in Florida also be able to give you advice about medication doses?” she asked. “The people behind the machine who are tasked with making it be somewhat less terrible in some of those circumstances have an impossible job.”
In conclusion, Google’s reliance on outside contractors to improve the accuracy and quality of its AI chatbot, Bard, has raised concerns about the contractors’ working conditions and the impact on the chatbot’s performance. The contractors face low wages, minimal training, and intense pressure to meet tight deadlines. They believe these conditions compromise the quality of the AI’s responses and contribute to a culture of fear and stress. The wide range of topics the chatbot covers and the subjective nature of assessing its responses further complicate the contractors’ work. The challenges these workers face underscore the ethical and labor issues surrounding the development and deployment of AI technologies.