
ChatGPT - AI Literacy

Best practices for using ChatGPT in your research

Quotes from the CEO

Tweets from the CEO of OpenAI, Sam Altman:

"ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it's a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness."   

"it does know a lot, but the danger is that it is confident and wrong a significant fraction of the time" 

 

ChatGPT has certainly grown and improved since its launch. Still, as exciting and groundbreaking as it is, there are limitations users need to be aware of.

Limitations

ChatGPT's tendency to "hallucinate," or make up incorrect information, leads to nonsense results approximately 3% of the time (New York Times, Nov. 16, 2023, "Chatbots May ‘Hallucinate’ More Often Than Many Realize," Cade Metz).

We must keep ChatGPT's limitations in mind as we use this tool.

According to Forbes (2023), the top 10 limitations of ChatGPT are:

  1. Lack of common sense: While ChatGPT can generate human-like responses and has access to a large amount of information, it does not possess human-level common sense — and the model also lacks the background knowledge we have. This means that ChatGPT may sometimes provide nonsensical or inaccurate responses to certain questions or situations.
  2. Lack of emotional intelligence: While ChatGPT can generate responses that seem empathetic, it does not possess true emotional intelligence. It cannot detect subtle emotional cues or respond appropriately to complex emotional situations.
  3. Limitations in understanding context: ChatGPT has difficulty understanding context, especially sarcasm and humor. While ChatGPT is proficient in language processing, it can struggle to grasp the subtle nuances of human communication. For example, if a user were to use sarcasm or humor in their message, ChatGPT may fail to pick up on the intended meaning and instead provide a response that is inappropriate or irrelevant.
  4. Trouble generating long-form, structured content: At this time, ChatGPT has some trouble generating long-form structured content. While the model is capable of creating coherent and grammatically correct sentences, it may struggle to produce lengthy pieces of content that follow a particular structure, format, or narrative. As a result, ChatGPT is currently best suited for generating shorter pieces of content like summaries, bullet points, or brief explanations.
  5. Limitations in handling multiple tasks at the same time: The model performs best when it’s given a single task or objective to focus on. If you ask ChatGPT to perform multiple tasks at once, it will struggle to prioritize them, which will lead to a decrease in effectiveness and accuracy.
  6. Potentially biased responses: ChatGPT is trained on a large set of text data — and that data may contain biases or prejudices. This means the AI may sometimes generate responses that are unintentionally biased or discriminatory.
  7. Limited knowledge: Although ChatGPT has access to a large amount of information, it is not able to access all of the knowledge that humans have. It may not be able to answer questions about very specific or niche topics, and it may not be aware of recent developments or changes in certain fields.
  8. Accuracy problems or grammatical issues: ChatGPT's sensitivity to typos, grammatical errors, and misspellings is limited at the moment. The model may also produce responses that are technically correct but may not be entirely accurate in terms of context or relevance. This limitation can be particularly challenging when processing complex or specialized information, where accuracy and precision are crucial. You should always take steps to verify the information ChatGPT generates.
  9. Need for fine-tuning: If you need to use ChatGPT for very specific use cases, you may need to fine-tune the model to get what you need. Fine-tuning involves training the model on a specific set of data to optimize its performance for a particular task or objective, and it can be time-consuming and resource-intensive (see the brief sketch after this list).
  10. Computational costs and power: ChatGPT is a highly complex and sophisticated AI language model that requires substantial computational resources to operate efficiently — which means running the model can be expensive and may require access to specialized hardware and software systems. Additionally, running ChatGPT on low-end hardware or systems with limited computational power can result in slower processing times, reduced accuracy, and other performance issues. Organizations should carefully consider their computational resources and capabilities before using ChatGPT.
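
For readers curious what fine-tuning (item 9) looks like in practice, the sketch below shows the general shape of the process using OpenAI's Python SDK. It is a minimal illustration under assumptions, not a recipe: the file name library_examples.jsonl and the choice of base model are hypothetical, and a real project would also involve preparing data, monitoring the job, and evaluating the result.

    # Minimal sketch of fine-tuning with the OpenAI Python SDK (openai >= 1.x).
    # Assumes chat-formatted training examples in "library_examples.jsonl" (a hypothetical file)
    # and an API key available in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Upload the training data; each JSONL line holds one {"messages": [...]} example.
    training_file = client.files.create(
        file=open("library_examples.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start a fine-tuning job against a supported base model.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",
    )

    print(job.id, job.status)  # poll until the job completes, then use the resulting model name

Even this small example hints at why item 9 describes the process as time-consuming and resource-intensive: it requires curated data, API access, and time to train and evaluate.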

 

Additional Considerations:

  • Be aware of OpenAI's Privacy Policy and understand how it may use your personal data.
  • ChatGPT 3.5 is not connected to the internet and doesn't function like a standard search engine. It was trained on a finite block of data with a 2021 cutoff. It is not collecting new information the way Google's search-engine spiders do every second. It is therefore automatically limited in content because it isn't cross-referencing academic databases, Google Scholar, and the wider web in real time. It will be weak in a number of subject areas and useless for information post-2021. This limits its value to STEM academics and researchers, or anyone at the cutting edge of a field (from Choice360, ChatGPT as a Tool for Library Research – Some Notes and Suggestions, Feb. 19, 2024).
  • It may not have access to information behind paywalls (such as scholarly journal articles).