Like most ChatGPT users, I’ve been blown away by the human-like quality of the language model OpenAI has produced. It is an amazing product. At Altron Karabina, the data and analytics team are experimenting with using the tool to verify and optimise code, such as SQL stored procedures and Python scripts. Salespeople are also using it to produce succinct summaries of complex products to help clients understand them better.
The promise of AI tools is that they can supplement human endeavour. A lawyer, financial adviser, marketing manager or programmer who can use ChatGPT, LaMDA or other AI tools effectively is going to be more productive, and possibly more inventive, than one who cannot. AI technology has the potential to supercharge human work, and there is no doubt that the potential is there to create more jobs and more prosperity, not less.
However, technology such as ChatGPT, like all human inventions, is a double-edged sword that can be used for good or ill. ChatGPT declares these concerns itself in almost every chat. In 2019, OpenAI decided to withhold the release of GPT-2, a previous version of its language model, citing concerns that the tool, which could generate convincing news articles, would be too easy to use for misinformation.
There are several issues. Firstly, the technology itself can produce misleading, wrong or damaging information, because AI models absorb the flaws of the content they learn from.
OpenAI has put in a lot of effort to limit the negative effects of training language models on a large corpus of documents drawn from Twitter, Wikipedia and various sections of the internet. It has done this by having humans tag text that is violent, racist, misogynistic or otherwise unacceptable, to ensure that it doesn’t contaminate the user experience with GPT-3. It’s good to see that organisations like OpenAI can be self-regulating in this regard, but can we be sure that all AI businesses will do the same? The EU is preparing regulation that might help here – the AI Act, expected to be made law in 2023 – but its impact remains to be seen (https://artificialintelligenceact.eu/).
Another concern is the damage that can be done in the process of creating the technology itself. As I write, Microsoft, OpenAI and GitHub are defending a class action lawsuit alleging that the corpus of programming code used to train the model behind GitHub’s coding assistant contains licensed code that should not be profited from. Matthew Butterick, who filed the suit, states that the creation of Copilot, GitHub’s GPT-3-based coding assistant, is “software piracy on an unprecedented scale”.
Echoing the first concern, the coding Q&A site Stack Overflow has banned AI-generated answers to programming questions, saying “these have a high rate of being incorrect”. The ownership question must also apply to the millions of non-coding articles and books that GPT-3 has been trained on – who really owns the poems and scripts generated by ChatGPT?
In addition, Time reported in January 2023 that the very act of labelling some of the internet’s most offensive content, done to reduce toxic output from GPT-3, has also caused damage. This work is generally outsourced to countries where labour is cheap. One Kenyan company, which paid workers less than $2 an hour to label troubling material, now faces allegations from employees of emotional trauma caused by the nature of the content they vetted. OpenAI is by no means the only AI company that uses low-cost labour for this purpose – last year, Time published another story about the same Kenyan company performing labelling work for Meta, in the article ‘Inside Facebook’s African Sweatshop’.
The impact of AI is far-reaching. Like many human inventions since fire itself, it will have to be guarded carefully to ensure that it keeps us warm rather than burning out of control.
Legislation will slowly appear to curb some of these issues with AI tools. In the meantime, if AI solutions are being deployed in a business, it may be worth adding an ethics gate to the development process, so that these risks are debated before release.
For more information visit https://altronkarabina.com/
© Technews Publishing (Pty) Ltd | All Rights Reserved