AI tools: the good, the bad, and the ugly

29 March 2023

Like most ChatGPT users, I’ve been blown away by the human-like quality of OpenAI’s language model; it is an amazing product. At Altron Karabina, the data and analytics team is experimenting with using the tool to verify and optimise code, such as SQL stored procedures and Python scripts. Salespeople are also using it to produce succinct summaries of complex products to help clients understand them better.

The good

The promise of AI tools is that they can supplement human endeavour. A lawyer, financial adviser, marketing manager or programmer who can use ChatGPT, LaMDA, or other AI tools effectively is going to be more productive, and possibly more inventive than one that cannot. AI technology has the potential to supercharge human work, and there is no doubt that the potential is there to create more jobs and more prosperity, not less.

The bad

However, technology such as ChatGPT, like all human inventions, is a double-edged sword that can be used for good or ill. ChatGPT itself declares these concerns in almost every chat. In 2019, OpenAI withheld the full release of GPT-2, a previous version of its language model, citing concerns that the tool, which could generate convincing news articles, was too easy to misuse for misinformation.

There are several issues. Firstly, the technology itself can produce misleading, wrong or damaging information, because AI models can be negatively influenced by the content they learn from.

OpenAI has put a lot of effort into avoiding the negative effects of training language models on a large corpus of documents drawn from Twitter, Wikipedia and other parts of the internet. It employs humans to tag text that is violent, racist, misogynistic or otherwise unacceptable, so that it does not contaminate the user experience with GPT-3. It is good to see organisations like OpenAI self-regulating in this regard, but can we be sure that all AI businesses will do the same? The EU is preparing regulation that might help: the AI Act, expected to become law in 2023, although its impact remains to be seen (https://artificialintelligenceact.eu/).

The ugly

The last concern is the damage that can be done in the process of creating the technology itself. As I write, Microsoft, OpenAI and GitHub are defending a class action lawsuit alleging that the corpus of programming code used to train GPT-3 contains licensed code that should not be profited from. Matthew Butterick, who filed the suit, calls the creation of Copilot, GitHub’s GPT-3-based coding assistant, “software piracy on an unprecedented scale”.

As a throwback to the previous concern, the coding Q&A site Stack Overflow has banned AI-generated answers to programming questions, saying “these have a high rate of being incorrect”. The same concern must apply to the millions of non-coding articles and books that GPT-3 has been trained on – who really owns the poems and scripts generated by ChatGPT?

In addition, Time reported in January 2023 that the very act of labelling some of the internet’s most offensive content to reduce toxic output from GPT-3 has itself caused damage. This work is generally outsourced to countries where labour is cheap, and a Kenyan company that paid workers less than $2 an hour to label troubling material now faces allegations of emotional trauma from employees dealing with the nature of the content they vetted. OpenAI is by no means the only AI company that uses low-cost labour for this purpose – last year, Time reported on the same Kenyan company performing labelling work for Meta in the article ‘Inside Facebook’s African Sweatshop’.

The verdict

The impact of AI is far-reaching. Like fire itself, it is an invention we will have to guard carefully to ensure that it keeps us warm rather than burning out of control.

Legislation will slowly appear to curb some of these issues with AI tools, but in the meantime, businesses deploying AI solutions may find it worthwhile to add an ethics gate to the development process, so that these risks are debated before release.
