ScienceBlogs

Tell me ChatGPT, how many meters can’t you jump?*

Screenshot - © Eurac Research

Professional athletes consistently jump around 8m in their early 20s. My guess is that most mere mortals can’t jump more than 6m. Of course, ChatGPT cannot jump at all.

The past years have shown how social media “echo chamber” effects can contribute to the spread of misleading information, false news, and rumours, which can have negative effects on communities, businesses, and governments. At their core, social media platforms run data-driven algorithms over huge volumes of data to curate personalised content, which can trap users in a “filter bubble” of intellectual isolation. This feeds one of the best-understood cognitive biases humans exhibit: confirmation bias, the tendency to favour information that confirms and supports prior beliefs.

Now, the latest kid on the data-processing block is the family of Generative Pretrained Transformers, part of the wider family of Large Language Models, with ChatGPT as its current, ever so quickly withering king. Admittedly, ChatGPT is impressive, at first: it can pass an MBA final exam, write resumes and great-sounding cover letters for job seekers, write essays for students, translate texts, help programmers write, debug, and understand code, and much more.

So, what’s the snag? Albert Einstein’s “the more I learn, the more I realize how much I don’t know” and Socrates’ “I know that I know nothing” capture our human self-awareness about the limits of our understanding. However, the less we know, the more prone we are to fall victim to another cognitive bias, the Dunning-Kruger effect: overestimating our abilities and knowledge in a particular area, even with limited or no experience. When interacting with ChatGPT, we may be tempted to trust its responses without questioning their accuracy or validity, especially on topics we are unfamiliar with.

In a rapidly changing technological world where it is difficult to keep up with the latest developments (keywords: ChatGPT 3.5, 4, 4o), we can always take a step back and see where the constants lie. Most of the time, we find them close to home, within ourselves. We humans are reliable in terms of our abilities and limitations: to avoid falling victim to cognitive biases, we should first and foremost know them, and always maintain a healthy level of scepticism and critical thinking. And we must counter our laziness with the certainty that there are no shortcuts in the hermeneutic circle.

*The question comes from the paper “Questioning to Resolve Decision Problems”.

Egon Stemle

Egon Stemle is a cognitive scientist with a keen interest in computational linguistics and artificial intelligence, nagged by the question of why humans handle incomplete and inconsistent concepts just fine, while computational processes often fail.

Tags

  • Ask a Linguist

Citation

Stemle, E. Tell me ChatGPT, how many meters can’t you jump?*. https://doi.org/10.57708/BXXSWVYK7S3ONALQ_JRYDJA

Related Post

Can computers generate language learning exercises?

Lionel Nicolas
“Wir sprechen schon ein schlechtes Deutsch, oder?”

Verena Platzgummer
Essere coerenti è importante anche quando si scrive?

Lorenzo Zanasi