As I sit in front of my computer and ponder the year that is swiftly coming to a close, I again marvel at everything that has happened. It has definitely been a busy year. There have been plenty of announcements in the tech space, flooding my inbox, especially when it comes to artificial intelligence and machine learning.
No matter who you speak to, they will have heard something about AI and the effects it will have on our everyday lives. And they all have their own opinions too. Many of those opinions, however, are not based on any sort of scientific fact.
One opinion piece that did cross my desk this month was an article on the training of AI models. Many people do not realise that AI is trained on human-generated information. Most of this comes from information that already exists on the internet in some form: articles that have been written, responses to questions, shopping habits, browsing history; pretty much anything is fair game for use in training AI models.
But there is a finite amount of non-repetitive information in existence. Let me clarify what I mean by this. Much of the enormous amount of information created each day is a copy of data or information that already exists. People reposting text, images and other multimedia, or copying data from one platform to another; all of this counts towards the total amount of information created.
Yes, there is still a massive amount of information available for training, but AI models are becoming more and more powerful, and can be trained on ever larger volumes of data. This year, even though the total amount of data generated globally was in the order of 120 zettabytes, much of it cannot be used for training models. ChatGPT was trained on roughly 570 gigabytes of data, which amounted to around 300 billion words. The more data used to train these AI models, the more accurate their responses become.
And this is where the concern starts to kick in for AI researchers: the volume of the datasets needed to train AI models is growing much more rapidly than the stock of online data. A paper published in 2022 predicted that, if the current training trend continues, we will run out of high-quality data before 2026. If models then turn to the remaining low-quality data, that too will be exhausted, sometime between 2030 and 2050.
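To make that mismatch concrete, here is a minimal back-of-the-envelope sketch in Python. The starting stock, dataset size and growth rates below are purely illustrative assumptions of mine, not figures from the 2022 paper; the point is only that exponentially growing demand overtakes a slowly growing supply within a handful of years.

```python
# Illustrative projection only: every number here is a hypothetical
# placeholder, not a figure from the 2022 study.
stock_tokens = 3.0e14      # assumed current stock of usable human-written text (tokens)
dataset_tokens = 1.0e12    # assumed training-set size of a large model today (tokens)
stock_growth = 1.07        # assumed ~7% annual growth in online text
demand_growth = 2.0        # assumed doubling of training-set size each year

year = 2024
while dataset_tokens < stock_tokens and year < 2060:
    stock_tokens *= stock_growth        # supply grows slowly
    dataset_tokens *= demand_growth     # demand grows exponentially
    year += 1

print(f"Under these assumptions, demand overtakes supply around {year}.")
```

Swap in different assumptions and the crossover year shifts, but the shape of the problem stays the same: a fast-growing curve eventually crosses a slow-growing one.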
But do we really want to train our models on low-quality data? We all know the bad decisions that can be made when only poor-quality data is available. After all, the internet is full of examples of ‘average’ people doing stupid things based on a lack of insight or forethought. Do we really want our artificial intelligences to be only as smart as the average person?
One hope is that newer AI models will have a lower data overhead; that is, they will be able to be trained suitably well on less data than their predecessors. I believe this would be similar to how many people reach conclusions nowadays: they are able to make quite reasonable decisions even when they do not know everything about a subject.
The overriding thing I have taken away from all this talk about AI this year is that we are certainly all living in an interesting and exciting era, even if it can be quite concerning at times.
To all our readers I would like to take this opportunity to wish you all a joyous and restful season. May your new year be filled with new goals, new achievements and above all, happiness.