Generate Text with ChatGPT: Understanding the Capabilities and Limitations of LLMs

Mehmet Ozkaya
5 min read · Nov 19, 2024


Welcome to our hands-on exploration of ChatGPT 4.0, where we’ll dive into how to generate text using this powerful language model. We’ll focus on both its impressive capabilities and its important limitations.

Get the Udemy course with a limited discount coupon: Generative AI Architectures with LLM, Prompt, RAG, Fine-Tuning and Vector DB

Large Language Models (LLMs) like ChatGPT have revolutionized the way we interact with technology, enabling us to generate human-like text, summarize articles, answer questions, classify text, analyze sentiment, perform semantic searches, and even generate code. However, it’s crucial to understand that these models have limitations, including factual inaccuracies and a lack of real-time knowledge.

In this guide, we’ll:

  • Understand how prompts work.
  • Experience hands-on examples showcasing the model’s capabilities.
  • Discuss the limitations and how they affect the model’s responses.

The Basic Flow: Prompt → LLM → Response

Let’s start by understanding the basic interaction with an LLM like ChatGPT.

Prompt → LLM → Response

  1. Prompt: This is the input you provide to the model. Think of it as a question, instruction, or statement that guides the model’s response.
  2. LLM (ChatGPT 4.0): The model processes your prompt using patterns and information it learned during training.
  3. Response: The model generates a response based on your prompt.
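If you'd rather drive this flow from code than from the chat UI, here's a minimal sketch using the OpenAI Python SDK. It assumes the openai package is installed, an OPENAI_API_KEY environment variable is set, and a placeholder model name ("gpt-4o"); adjust these to your own setup.

from openai import OpenAI

# The SDK picks up OPENAI_API_KEY from the environment by default.
client = OpenAI()

# 1. Prompt: the instruction we send to the model.
prompt = "Write a short story about a brave knight and a dragon."

# 2. LLM: the model processes the prompt as a user message.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; swap in the model you use
    messages=[{"role": "user", "content": prompt}],
)

# 3. Response: the generated text comes back as an assistant message.
print(response.choices[0].message.content)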

What is a Prompt?

A prompt is essentially a way to “program” the model using natural language. You don’t need to write code; simply ask a question or give instructions, and the model will respond accordingly. We’ll dive deeper into prompt engineering later, but for now, remember that the prompt guides the model’s behavior.

Hands-on Examples: Exploring ChatGPT’s Capabilities

Let’s get our hands dirty with some practical examples to see ChatGPT 4.0 in action.

1. Text Generation

Prompt:

Write a short story about a brave knight and a dragon.

Response:

Once upon a time in a kingdom far away, a brave knight named Sir Roland set out to save his village from a fearsome dragon. The dragon had been terrorizing the countryside, burning crops and scaring villagers. Sir Roland, clad in shining armor, ventured into the dragon’s lair armed with only his sword and unwavering courage. After a fierce battle, he outsmarted the dragon by luring it into a trap, ultimately bringing peace back to the kingdom. The villagers rejoiced and hailed Sir Roland as a hero.

ChatGPT generates a coherent and creative story based on the prompt, showcasing its ability to produce human-like narratives.

2. Summarization

Prompt:

Summarize the following article:
"Artificial intelligence (AI) is transforming industries across the globe. From healthcare to finance, AI technologies are improving efficiency and enabling new innovations. In healthcare, AI algorithms assist doctors in diagnosing diseases more accurately. In finance, AI helps detect fraudulent transactions and manage risk. As AI continues to evolve, its impact on society will grow exponentially."

Response:

Artificial intelligence is revolutionizing various industries by enhancing efficiency and fostering innovation. In healthcare, it aids in more accurate disease diagnosis, while in finance, it helps detect fraud and manage risks. AI’s evolving capabilities promise an exponentially growing impact on society.

The model effectively condenses the key points of the article into a brief summary.

3. Completing Sentences

Prompt:

The dog ran into the yard and started to

Response:

bark excitedly at the squirrels scampering up the trees.

ChatGPT completes the sentence in a way that maintains context and coherence.

4. Question & Answer (Q&A)

Prompt:

What is the capital of Japan?

Response:

Tokyo is the capital of Japan.

The model provides a straightforward and accurate answer to the factual question. Try inputting your own factual questions and see how accurately the model responds.

5. Text Classification

Prompt:

Classify the following sentence as positive, negative, or neutral: "I really enjoyed this product."

Response:

Positive.

ChatGPT correctly identifies the sentiment expressed in the sentence. Test different sentences to see how well the model classifies their sentiment.
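To run the same kind of classification from code, a common pattern is to pin the allowed labels in a system message so the model returns a single word your application can act on. Below is a small sketch under the same assumptions as before (OpenAI Python SDK, placeholder model name); the classify_sentiment helper is just an illustration, not part of the article's app.

from openai import OpenAI

client = OpenAI()

def classify_sentiment(sentence: str) -> str:
    # Constrain the model to exactly one of three labels via the system message.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        temperature=0,   # keep the label as deterministic as possible
        messages=[
            {"role": "system",
             "content": "Classify the user's sentence as positive, negative, or neutral. "
                        "Reply with exactly one of those three words."},
            {"role": "user", "content": sentence},
        ],
    )
    return response.choices[0].message.content.strip().lower()

print(classify_sentiment("I really enjoyed this product."))  # expected: positive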

6. Code Generation

Prompt:

Write a Python function to calculate the Fibonacci sequence.

Response:

def fibonacci(n):
    sequence = []
    a, b = 0, 1
    while len(sequence) < n:
        sequence.append(b)
        a, b = b, a + b
    return sequence

# Example usage:
print(fibonacci(10))

The model generates a Python function that calculates the Fibonacci sequence, demonstrating its ability to handle technical programming tasks.
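Model-generated code should be sanity-checked before you rely on it. A quick way is to run it against values you already know; for the function above, for example:

# Quick check of the generated function against known Fibonacci values.
assert fibonacci(10) == [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print("fibonacci(10) looks correct")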

Understanding the Limitations of ChatGPT 4.0

While ChatGPT is powerful, it’s essential to be aware of its limitations.

1. Factual Inaccuracy and Hallucinations

The model may generate incorrect information or “hallucinate” facts that are not based on its training data.

Example:

Who won the World Cup in 2023?

Response:

As of my last update in September 2021, the 2023 FIFA World Cup has not occurred. The next men’s World Cup is scheduled for 2022 in Qatar.

Because the model's knowledge cuts off in September 2021, it cannot provide accurate information about events after that date and may try to fill in the gaps, leading to incorrect responses.

2. Lack of Real-Time Knowledge

ChatGPT cannot access or retrieve real-time data, so it cannot provide current information.

Example:

What is the weather in New York City right now?

Response:

I’m sorry, but I don’t have access to real-time data. Please check a weather website or app for the current weather in New York City.

The model acknowledges its inability to provide live updates, highlighting its limitation in accessing current information.

Addressing Limitations with Future Features

In future updates and with features like "Function Calling," LLMs can integrate with external APIs to fetch real-time data, potentially mitigating some of these limitations.
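To make that concrete, here's a rough sketch of how Function Calling looks with the OpenAI Python SDK: you describe a tool the model may request, and instead of guessing, the model returns a structured call that your code can execute against a real API. The get_current_weather tool, its parameters, and the model name are illustrative assumptions, not something defined in this article.

from openai import OpenAI

client = OpenAI()

# Describe a hypothetical tool the model is allowed to request.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. New York City"}
            },
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user",
               "content": "What is the weather in New York City right now?"}],
    tools=tools,
)

# Rather than hallucinating a forecast, the model asks for a tool call.
# Your code would run the real weather lookup and send the result back
# in a follow-up request so the model can phrase the final answer.
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, tool_call.function.arguments)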

To recap, here is a quick summary of what we covered in this section.

Capabilities:

  • Text Generation: Crafting stories, essays, and more.
  • Summarization: Condensing long texts into key points.
  • Question & Answer: Responding to factual queries.
  • Text Classification: Analyzing sentiment and categorizing text.
  • Code Generation: Writing code snippets in various programming languages.

Limitations:

  • Factual Inaccuracies: Potential for incorrect or made-up information.
  • Lack of Real-Time Knowledge: Inability to provide current data or events.

As powerful as LLMs are, getting the most out of them requires skillful prompting. In our next section, we’ll delve deeper into Prompt Engineering, learning techniques to optimize the model’s responses and further enhance your interactions with ChatGPT.

Get the Udemy course with a limited discount coupon: Generative AI Architectures with LLM, Prompt, RAG, Fine-Tuning and Vector DB

EShop Support App with AI-Powered LLM Capabilities

You'll get hands-on experience designing a complete EShop Customer Support application that integrates LLM capabilities such as Summarization, Q&A, Classification, Sentiment Analysis, Embedding-based Semantic Search, and Code Generation into an enterprise application.
