AI in More Depth
When we talk about AI now, it seems to be used everywhere. Google Cloud defines it this way: "Artificial intelligence is a broad field, which refers to the use of technologies to build machines and computers that have the ability to mimic cognitive functions associated with human intelligence, such as being able to see, understand, and respond to spoken or written language, analyze data, make recommendations, and more" (Google Cloud). In other words, AI is, at its core, a means of mimicking human behavior: a complex system of technologies within a product that detects errors, learns, and finds the best response to the prompt it was given. Google Cloud also notes that machine learning is a specific type of AI that enables systems to learn and build on what they already know. It relies on algorithms that analyze and compile data in order to learn patterns and find the "best," or most probable, answer to a prompt. The main thing that separates the two is that machine learning teaches a machine how to do things by letting it learn from experience, based on patterns in the data it is given.
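To make that distinction concrete, the toy sketch below (my own illustration, not drawn from Google Cloud's material, and assuming the scikit-learn library is available) contrasts a rule a programmer writes by hand with a model that learns the same relationship from example data:

```python
# A minimal sketch contrasting a hand-coded rule with a learned model.
from sklearn.linear_model import LinearRegression

# Hand-written rule: the programmer supplies the logic directly.
def rule_based_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Machine learning: the system infers the pattern from labeled examples.
celsius_examples = [[0], [10], [20], [30], [40]]
fahrenheit_labels = [32, 50, 68, 86, 104]

model = LinearRegression()
model.fit(celsius_examples, fahrenheit_labels)  # the "learning" step

print(rule_based_fahrenheit(25))   # 77.0, from the explicit rule
print(model.predict([[25]])[0])    # ~77.0, inferred from the data
```

Both approaches give the same answer here, but only the second one discovered the pattern on its own from data, which is what separates machine learning from the broader category of AI.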
When it comes to applications, both of these can be beneficial. The idea here is that only people with the right qualifications and applicable skills should be handling this technology. In a Great Learning article, Raju writes about the skills necessary to work with AI on the business and industrial side of things: knowing programming and its respective languages, math, and having a deeper understanding of machine learning (Raju, 2023). The problem with commercial AI use is that many users are not at that level of expertise; people use openly available AI programs such as ChatGPT without being qualified to use them, and therefore without a deeper understanding of how they work. This goes back to the idea that a tool like this draws on the entire Internet, or at least whatever it has access to, to arrive at its answer. When somebody uses it, that response must be stored somewhere, and this leads to the issue of the AI tool getting dumber.
Leading into the next point, ChatGPT is getting less intelligent. Al-Sibai of Futurism cites a study by Stanford and UC Berkeley researchers: "'We find that the performance and behavior of both GPT-3.5 and GPT-4 vary significantly across these two releases and that their performance on some tasks have gotten substantially worse over time,' the paper noted, adding that it's 'interesting' to question whether GPT-4 is indeed getting stronger" (Al-Sibai, 2023). What is observed is that, across versions, the ChatGPT system is seemingly becoming less accurate. The researchers tested whether the models could perform the same tasks given the same prompts across the two releases, and the results did not match; in fact, the later release was more inaccurate on several tasks. What can be inferred is that all of the data being fed into these systems, including what people enter through everyday commercial use, contributes to systems like ChatGPT getting dumber.
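To illustrate the kind of comparison the researchers describe, the sketch below (my own rough illustration, not the study's actual code) sends one identical prompt to two dated model snapshots and prints both answers side by side; the snapshot names, the example prompt, and the use of OpenAI's Python client are assumptions on my part:

```python
# A rough sketch of comparing one prompt across two dated model releases.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = "Is 17077 a prime number? Answer yes or no, then explain."
SNAPSHOTS = ["gpt-3.5-turbo-0301", "gpt-3.5-turbo-0613"]  # example dated snapshots

answers = {}
for snapshot in SNAPSHOTS:
    response = client.chat.completions.create(
        model=snapshot,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # reduce randomness so differences reflect the model, not sampling
    )
    answers[snapshot] = response.choices[0].message.content

for snapshot, answer in answers.items():
    print(f"--- {snapshot} ---\n{answer}\n")
```

Running a batch of prompts this way and scoring each answer against a known-correct result is, in essence, how a drop in accuracy between releases can be measured.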