AI answers a number of questions incorrectly: ChatGPT not suitable for programming

The text-based AI bot ChatGPT from developer OpenAI is likely familiar to a wide audience by now and has even made its way into the boardrooms of large companies. ChatGPT astonishes with its range of capabilities and its answers to user questions. Yet the latter remains a major weakness of the “all-rounder”: the answers are often imprecise and therefore unsatisfactory, or simply wrong, which is why a human still has to stand at the end of the processing chain.

ChatGPT as an aid for programming? Better not

Purdue University in Indiana has now investigated this problem more closely in a study, and it arrives at a scathing verdict for ChatGPT. What exactly did the study examine? The Purdue researchers wanted to find out how accurately ChatGPT answers questions about programming and writing code, so they fed the AI questions from Stack Overflow, an internet forum where software developers exchange ideas. In total, ChatGPT had to deal with 517 programming questions. The AI’s responses were evaluated on several aspects, including correctness, consistency, completeness and conciseness.
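The paper’s actual evaluation pipeline is not reproduced here, but conceptually such a setup can be imagined as: pose each Stack Overflow question to the model, store the answer, and grade it afterwards on the four criteria. The following is a purely hypothetical minimal sketch, assuming the official OpenAI Python client; the model name, file format and function names are illustrative assumptions, not the researchers’ code.

```python
# Hypothetical sketch of an evaluation loop like the one described above.
# Assumes the official OpenAI Python client ("pip install openai") and an
# OPENAI_API_KEY in the environment; model name and CSV layout are assumptions.
import csv
from openai import OpenAI

client = OpenAI()

def ask_model(question: str) -> str:
    """Send a single Stack Overflow question to the chat model and return its answer."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

def collect_answers(questions_file: str, output_file: str) -> None:
    """Query the model for each question and store the answers so they can later be
    graded manually on correctness, consistency, completeness and conciseness."""
    with open(questions_file, newline="", encoding="utf-8") as src, \
         open(output_file, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)  # expects columns: id, title, body
        writer = csv.writer(dst)
        writer.writerow(["id", "question", "model_answer"])
        for row in reader:
            question = f"{row['title']}\n\n{row['body']}"
            writer.writerow([row["id"], question, ask_model(question)])

if __name__ == "__main__":
    collect_answers("stackoverflow_questions.csv", "chatgpt_answers.csv")
```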

More than half of ChatGPT’s answers to the programming questions, 52 percent, are said to have been incorrect, and more than three quarters, 77 percent, were unnecessarily long. The researchers found it even more problematic that the AI’s eloquent language style often misled the study participants: unless an error was obvious, they found it difficult to spot the inaccuracies. Interesting and alarming at the same time: despite the incorrect answers, participants are said to have preferred ChatGPT’s responses in almost 40 percent of cases, and 77 percent of those preferred answers turned out to be wrong.

The researchers explained that ChatGPT was unable to capture the contextual nuances of the questions, which led to a number of incorrect answers. The lesson from the study is therefore that current generative AI is not suitable as a supporting tool for writing code and can even be counterproductive when programming. Major companies such as Samsung, Google, Apple, and Amazon have either banned the use of generative AIs like ChatGPT for code suggestions or issued warnings against it.

Source: Purdue University study via Gizmochina
