AI & ML


AI tools: the good, the bad, and the ugly

29 March 2023

Like most ChatGPT users, I’ve been blown away by the sheer human-like quality of OpenAI’s language model. It is an amazing product. At Altron Karabina, the data and analytics team is experimenting with using the tool to verify and optimise code, such as SQL stored procedures and Python scripts. Salespeople are also using it to produce succinct summaries of complex products to help clients understand them better.
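As a rough illustration of the kind of code-review experiment described above, one might wrap a stored procedure in a review request and send it to an LLM API. This is a hedged sketch, not Altron Karabina’s actual workflow: the `build_review_prompt` helper, the prompt wording, and the model name are all my own assumptions.

```python
# Hypothetical sketch: asking an LLM to review a SQL stored procedure.
# Helper names, prompt wording and model choice are illustrative assumptions.
import os


def build_review_prompt(sql: str) -> str:
    """Wrap a stored procedure in a code-review request for the model."""
    return (
        "Review the following SQL stored procedure for correctness and "
        "performance, and suggest optimisations:\n\n" + sql
    )


SQL = """
CREATE PROCEDURE dbo.GetActiveCustomers AS
SELECT * FROM Customers WHERE IsActive = 1;
"""

prompt = build_review_prompt(SQL)

# Only call the API when a key is configured; building the prompt is local.
if os.getenv("OPENAI_API_KEY"):
    from openai import OpenAI  # requires the `openai` package

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)
```

The model’s suggestions would, of course, still need human verification before being applied, which is the whole point of treating the tool as a supplement rather than a replacement.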

The good

The promise of AI tools is that they can supplement human endeavour. A lawyer, financial adviser, marketing manager or programmer who can use ChatGPT, LaMDA, or other AI tools effectively is going to be more productive, and possibly more inventive, than one who cannot. AI technology has the potential to supercharge human work, and it may well create more jobs and more prosperity, not less.

The bad

However, technology such as ChatGPT, like all human inventions, is a double-edged sword and can be used for good or ill. ChatGPT declares these concerns itself in almost every chat. In 2019 OpenAI decided to withhold the release of GPT-2, a previous version of its language model, citing concerns that the tool, which could generate convincing news articles, was too easy to misuse for misinformation.

There are several issues. First, the technology itself can produce misleading, wrong or damaging information, since AI models inherit the flaws of the content they learn from.

OpenAI has put a lot of effort into avoiding the negative effects of training language models on a large corpus of documents drawn from Twitter, Wikipedia and various other parts of the internet. It has done so by having humans tag text that is violent, racist, misogynistic or otherwise unacceptable, to ensure that it doesn’t contaminate the user experience with GPT-3. It’s good to see organisations like OpenAI self-regulating in this regard, but can we be sure that all AI businesses will do the same? The EU is preparing regulation that might help, the AI Act, expected to become law in 2023, but its impact remains to be seen (https://artificialintelligenceact.eu/).

The ugly

The last concern is the damage that can be done in the process of creating the technology itself. As I write, Microsoft, OpenAI and GitHub are defending a class action lawsuit alleging that the corpus of programming code used to train GPT-3 contains licensed code that should not be profited from. Matthew Butterick, who filed the suit, states that the creation of Copilot, GitHub’s GPT-3-based coding assistant, is “software piracy on an unprecedented scale”.

As a throwback to the previous concern, the coding Q&A site, Stack Overflow, has banned AI-generated answers to programming questions, saying “these have a high rate of being incorrect”. This concern must also apply to the millions of non-coding articles and books that GPT-3 has been trained on – who really owns the poems and scripts generated by ChatGPT?

In addition, Time reported in January 2023 that the very act of labelling some of the internet’s most offensive content, to reduce any potential toxic output from GPT-3, has also caused damage. This work is generally outsourced to countries where labour is cheap, and a Kenyan company that paid workers less than $2 an hour to label troubling material seems to have had issues with employees who allege emotional trauma from the nature of the content they vetted. OpenAI is by no means the only AI company that uses low-cost labour for this purpose – last year, Time published another story about the same Kenyan company performing labelling work for Meta in the article ‘Inside Facebook’s African Sweatshop’.

The verdict

The impact of AI is far-reaching. Like many human inventions since fire itself, it will have to be guarded carefully to ensure that it keeps us warm, rather than burning out of control.

Legislation that curbs some of these issues with AI tools will slowly appear, but in the meantime, if AI solutions are being deployed in a business, it may be worth adding an ethics gate to the development process so that these risks are debated before release.


















© Technews Publishing (Pty) Ltd | All Rights Reserved