If you’re new to programming or just getting started with Python, integrating a conversational AI like OpenAI’s ChatGPT may sound a bit daunting. Don’t worry—in this guide, we’ll walk you through every step at a gentle pace, explaining each term and command so you can follow along even if this is your first project.
Prerequisites
Before diving in, make sure you have:
1. Python 3.7 or newer
   - Download and install from python.org.
   - During installation on Windows, check “Add Python to PATH” so you can run `python` from your command line.
2. A free OpenAI API key
   - Sign up or log in at platform.openai.com.
   - Click API Keys → Create new secret key.
   - Copy the key; you’ll need it in the next step.
3. A command-line or terminal window
   - Windows: Search for “Command Prompt” or “PowerShell.”
   - macOS/Linux: Open the built-in Terminal app.
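If you want to double-check that your interpreter meets the version requirement, Python can tell you itself. A small sketch (the `version_ok` helper is just for illustration, not part of any library):

```python
import sys

def version_ok(info) -> bool:
    """True if the interpreter version meets this guide's 3.7 minimum."""
    return tuple(info[:2]) >= (3, 7)

if version_ok(sys.version_info):
    print(f"OK: Python {sys.version_info.major}.{sys.version_info.minor}")
else:
    print("Please upgrade: this guide needs Python 3.7 or newer.")
```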
Create and Activate a Virtual Environment
Isolating your project’s dependencies ensures your global Python setup stays clean:
# macOS/Linux
python3 -m venv venv
source venv/bin/activate
# Windows (PowerShell)
python -m venv venv
venv\Scripts\Activate.ps1
Code Explanation
- `python3 -m venv venv`: Creates a new virtual environment named `venv` in your project folder.
- `source venv/bin/activate` (macOS/Linux) or `venv\Scripts\Activate.ps1` (Windows): Activates the virtual environment so subsequent `pip` installs go inside `venv` instead of your global Python.
- Your prompt will show `(venv)`, indicating you’re working inside the sandboxed environment.
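If you’re not sure activation worked, Python can report it: inside an activated virtual environment, `sys.prefix` points at the `venv` folder rather than the base installation. A quick check, written as a tiny helper for illustration:

```python
import sys

def in_virtualenv() -> bool:
    """In an activated venv, sys.prefix differs from the base interpreter's prefix."""
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print("Inside a virtual environment?", in_virtualenv())
```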
Once activated, install required packages:
pip install openai python-dotenv
Code Explanation
- `pip install openai python-dotenv`: Installs the OpenAI client library and the `python-dotenv` package for loading environment variables from a `.env` file.
Securely Store Your API Key
Hard-coding keys in your code is risky. Instead, set an environment variable or use a `.env` file with `python-dotenv`:
- Create a `.env` file in your project root with `OPENAI_API_KEY="sk-…your_secret_key…"`
- Load it in your script:
import os
import openai
from dotenv import load_dotenv

load_dotenv()  # reads .env into environment
openai.api_key = os.getenv("OPENAI_API_KEY")
Code Explanation
- `from dotenv import load_dotenv` / `load_dotenv()`: Reads key-value pairs from `.env` into `os.environ`.
- `os.getenv("OPENAI_API_KEY")`: Fetches the value of `OPENAI_API_KEY` from environment variables.
- `openai.api_key = ...`: Configures the OpenAI client to use your secret key.
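It’s also worth failing fast if the key didn’t load (for instance, if the `.env` file is missing or misnamed), since that gives a clearer error than a failed API call later. A sketch of such a guard (the `require_api_key` helper is a suggestion, not part of either library):

```python
import os

def require_api_key() -> str:
    """Return the API key, or raise a clear error if it's missing."""
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Create a .env file or export the variable."
        )
    return key
```

Call it once right after `load_dotenv()` and assign the result to `openai.api_key`.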
Write Your First Python Script
Create a file named `chat.py` and open it in your editor:
import os
import openai
from dotenv import load_dotenv

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

messages = [
    {"role": "system", "content": "You are a friendly coding tutor."},
    {"role": "user", "content": "How do I print 'Hello, world!' in Python?"}
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=0.5,
    max_tokens=150
)

print(response.choices[0].message.content)
Code Explanation
- `messages` list: Defines the conversation; the `"system"` message sets the assistant’s behavior, `"user"` is your prompt.
- `openai.ChatCompletion.create(...)`: Sends your messages to the ChatGPT API.
- `model`: Chooses the GPT model (e.g., `gpt-3.5-turbo`).
- `temperature`: Controls randomness; 0.5 is balanced.
- `max_tokens`: Limits the response length.
- `response.choices[0].message.content`: Extracts the assistant’s reply text from the API response.
Inspecting and Understanding the Response
The raw response is a JSON-like object. You can inspect its structure to monitor usage and costs:
print(response) # full JSON
print("Reply:", response.choices[0].message.content)
print("Prompt tokens:", response.usage.prompt_tokens)
print("Completion tokens:", response.usage.completion_tokens)
print("Total tokens:", response.usage.total_tokens)
Code Explanation
- `response.usage.prompt_tokens`: Number of tokens sent in your prompt (counts toward cost).
- `response.usage.completion_tokens`: Tokens generated by the model in its reply.
- `response.usage.total_tokens`: Sum of prompt and completion tokens, used for billing.
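Since token counts translate directly into cost, you can fold them into a tiny estimator. The rates below are deliberately passed in as parameters rather than hard-coded, because pricing changes; check OpenAI’s pricing page for the current per-token rates before trusting any numbers:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_rate: float, completion_rate: float) -> float:
    """Estimate request cost in dollars, given per-1K-token rates."""
    return (prompt_tokens / 1000) * prompt_rate + (completion_tokens / 1000) * completion_rate

# Example with made-up rates of $0.001 / $0.002 per 1K tokens:
print(estimate_cost(500, 250, 0.001, 0.002))
```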
Seeing the Assistant “Type” in Real Time
To stream tokens as they arrive—ideal for chat UIs:
for chunk in openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
    stream=True
):
    delta = chunk.choices[0].delta
    if delta.get("content"):  # the first chunk may carry only a role, no text
        print(delta.content, end="", flush=True)
Code Explanation
- `stream=True`: Instructs the API to return partial results as they’re generated.
- Each `chunk` contains a `delta` object with new text (`delta.content`).
- `print(..., end="", flush=True)`: Prints tokens one by one without newlines, creating a typing effect.
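Streaming only prints the text as it goes; if you also want the complete reply afterwards (to append to a conversation history, say), accumulate the fragments as they arrive. A sketch with the chunk handling reduced to plain strings so the joining logic is clear (`assemble_reply` is an illustrative helper, not an API function):

```python
def assemble_reply(pieces):
    """Join streamed text fragments into the full reply.
    In real use, each piece would be one chunk's delta.content."""
    return "".join(piece for piece in pieces if piece)

# Simulated stream: early chunks may carry no text (None).
fragments = [None, "Hello", ", ", "world", "!"]
print(assemble_reply(fragments))  # Hello, world!
```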
Handling Errors and Retries
Network hiccups or rate limits can happen. Wrap your call in error handling:
import time
from openai.error import RateLimitError, OpenAIError

def safe_chat(messages):
    for attempt in range(3):
        try:
            return openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
        except RateLimitError:
            wait = 2 ** attempt
            print(f"Rate limited. Retrying in {wait}s…")
            time.sleep(wait)
        except OpenAIError as e:
            print("API error:", e)
            break
    return None
Code Explanation
- Retries up to 3 times on
RateLimitError
, doubling the wait each time (1s, 2s, 4s
). - Catches other
OpenAIError
s to log and exit gracefully. - Returns
None
if all attempts fail.
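The backoff schedule itself is just arithmetic, so you can pull it out and test it on its own. One common refinement, shown here as an optional cap, keeps waits from growing unbounded if you ever allow more attempts (the helper and its defaults are suggestions, not part of the OpenAI library):

```python
def backoff_waits(attempts: int, base: float = 1.0, cap: float = 30.0):
    """Return the exponential backoff wait (in seconds) for each retry,
    capped at `cap` so long retry runs don't stall forever."""
    return [min(base * (2 ** attempt), cap) for attempt in range(attempts)]

print(backoff_waits(3))  # [1.0, 2.0, 4.0]
print(backoff_waits(7))
```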
A Beginner-Friendly Chatbot Loop
Combine everything into an interactive script `cli_chat.py`:
import os, openai, time
from dotenv import load_dotenv
from openai.error import RateLimitError

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

def chat_loop():
    history = [{"role": "system", "content": "You help beginners learn Python."}]
    print("Welcome to ChatGPT! (type 'exit' to quit)\n")
    while True:
        user_input = input("You: ")
        if user_input.lower() in ("exit", "quit"):
            print("Goodbye!")
            break
        history.append({"role": "user", "content": user_input})
        # Safe call with retry
        for attempt in range(3):
            try:
                resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
                break
            except RateLimitError:
                time.sleep(2 ** attempt)
        else:
            print("Failed to get response. Please try again later.")
            continue
        reply = resp.choices[0].message.content
        print("ChatGPT:", reply)
        history.append({"role": "assistant", "content": reply})

if __name__ == "__main__":
    chat_loop()
Code Explanation
- Maintains a `history` of messages so ChatGPT retains context.
- Prompts the user in a loop until they type “exit.”
- Uses the `safe_chat` retry logic inline to handle rate limits gracefully.
- Prints each reply and appends it to `history` for context.
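One thing the loop doesn’t handle is history growth: every turn adds messages, and a very long history eventually exceeds the model’s context window and costs more per request. A simple mitigation is to keep the system message plus only the most recent messages. A sketch (the `keep` cutoff is an arbitrary choice, not an API requirement):

```python
def trim_history(history, keep=10):
    """Keep the system message plus the last `keep` non-system messages."""
    system = [m for m in history if m["role"] == "system"]
    rest = [m for m in history if m["role"] != "system"]
    return system + rest[-keep:]

# Example: a long conversation trimmed to the system message + last 4 messages.
history = [{"role": "system", "content": "You help beginners learn Python."}]
for i in range(6):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})
print(len(trim_history(history, keep=4)))  # 5
```

Call `trim_history(history)` before each API request inside the loop.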
Customizing Your Prompts
Experiment to shape ChatGPT’s behavior:
- System message: `{"role": "system", "content": "You are an expert Python tutor and explain in simple terms."}`
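Because the system message is just the first element of `messages`, swapping personas is easy to factor into a small helper (the function and the persona strings here are only examples, not part of the OpenAI library):

```python
def build_messages(system_prompt, user_prompt):
    """Start a fresh conversation with the given system persona."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

tutor = build_messages("You are an expert Python tutor and explain in simple terms.",
                       "What is a list comprehension?")
pirate = build_messages("You answer every question as a pirate.",
                        "What is a list comprehension?")
print(tutor[0]["content"])
```

The same user question paired with different system messages is a quick way to see how strongly the persona shapes the reply.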