Wednesday, Jul 17, 2024
Testing with ChatGPT: How Can It Help with Exploratory Testing?
Software testing is a crucial part of the software development life cycle. In this article, I will focus on exploratory testing, one of the most widely used methods in manual software testing, known for its flexibility and effectiveness in uncovering hidden issues. It’s about diving into the unknown, investigating how software behaves under various conditions, and finding problems that more rigid, scripted testing might miss. Exploratory testing can be exciting and challenging: it demands quick thinking, adaptability, and the ability to document an often chaotic process clearly and in detail. In my work as a manual software tester, ChatGPT has become a very helpful assistant, especially when I need support during my testing endeavors.
In this article, I’ll share examples of how ChatGPT has changed my approach to testing by helping me quickly learn about new topics, generate creative test ideas, and improve documentation and communication. I’ll also include examples of prompts I’ve used along with ChatGPT’s responses - note that the prompts have been generalized and simplified for readability, courtesy of ChatGPT itself.
Deep Dives into New Topics
As a tester working in a rapidly evolving field, I often deal with features or systems that are unfamiliar to me. Because of that, it’s crucial to be up to speed and learn quickly so I can conduct meaningful tests and assess potential vulnerabilities. ChatGPT is my go-to tool for fast and reliable information gathering and learning about new topics.
Example Scenario
While testing a payment processing feature, I encountered issues related to handling failed transactions, such as timeouts or declined payments. I needed to understand how payment systems typically handle these errors to ensure my testing would cover edge cases like double charges or incorrect transaction statuses.
**Prompt Example**
_I’m testing a payment processing feature in an application.
Can you explain how payment gateways typically handle failed transactions,
such as timeouts, declined payments, or network interruptions?
What are some common failure points or vulnerabilities I should focus on in
my testing, particularly around issues like preventing double charges,
handling partial payments, or ensuring accurate transaction statuses in
case of failure?_
**ChatGPT’s Answer:**
Payment gateways typically handle failed transactions through a series of
steps to ensure transaction integrity and user security. For timeouts or
network interruptions, many systems will either cancel the transaction
entirely or flag it as incomplete, requiring a retry.
*Common failure points include*:
- **Double charges**: Ensuring that retries don't accidentally result in
the user being charged twice.
- **Partial payments**: Handling cases where part of the transaction
succeeds (e.g., funds deducted from the user’s account) but the
merchant doesn't receive confirmation.
- **Transaction statuses**: Ensuring that accurate statuses are reflected
in both the user's and merchant's systems, even when the transaction fails.
**Testing scenarios** to focus on:
- Simulating network interruptions during the payment process.
- Testing how the system handles duplicate requests or retries.
- Investigating how the system handles declined payments or expired cards.
Pay attention to any **logs or audit trails** to ensure that the system
records failed attempts properly.
This answer helped me simulate real-world transaction failures, focusing my testing on the payment system’s resilience under unusual circumstances.
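As a concrete illustration of the double-charge scenario, here is a minimal sketch of an idempotency-key check, a common safeguard in real gateways. The `PaymentGateway` class and its API are hypothetical, invented for this example:

```python
import uuid

class PaymentGateway:
    """Toy in-memory gateway illustrating idempotent retries.
    All names and behavior here are hypothetical, not a real payment API."""

    def __init__(self):
        self._processed = {}  # idempotency key -> original charge result

    def charge(self, idempotency_key, amount):
        # A retry carrying the same key returns the original result
        # instead of charging the customer a second time.
        if idempotency_key in self._processed:
            return self._processed[idempotency_key]
        result = {"id": str(uuid.uuid4()), "status": "charged", "amount": amount}
        self._processed[idempotency_key] = result
        return result

# Simulate a client that retries after a timeout, reusing the same key.
gateway = PaymentGateway()
key = "order-1234"
first = gateway.charge(key, 100)
retry = gateway.charge(key, 100)   # retry after a simulated timeout
assert first["id"] == retry["id"]  # same charge, no double billing
```

When testing a real gateway, the equivalent check is to replay the same request (with the same idempotency key, if the system uses one) and verify the customer is billed exactly once.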
Brainstorming Creative Test Cases
Exploratory testing thrives on creativity and the ability to think beyond the normal use cases. ChatGPT helps me brainstorm test ideas that push beyond typical user interactions, revealing potential issues that may otherwise go unnoticed.
Example Scenario
I was testing a file upload feature and had already tested common scenarios like uploading large files or unsupported formats. I needed more ideas for nuanced edge cases that might break the system.
**Prompt Example**
_What are some edge cases to consider when testing a file upload feature
in a web application, including more technical issues like metadata
corruption or interrupted uploads?_
**ChatGPT’s Answer:**
Here are several edge cases to consider when testing a file upload feature:
- **Metadata corruption**: Test files with incorrect or corrupted metadata
(such as wrong file extension or missing metadata).
- **Interrupted uploads**: Simulate network interruptions during the
upload process to see how the system handles incomplete files.
- **Duplicate files**: Attempt to upload the same file multiple times to
check how duplicates are handled.
- **Maximum file size limits**: Try uploading files that are just below,
at, or above the allowed size limits.
- **Unsupported formats**: Upload files in uncommon or unsupported formats
(e.g., raw camera files) and observe the system’s error handling.
- **Special characters in file names**: Upload files with special characters
or non-Latin characters in the file name and check for encoding or
handling issues.
This answer helped me extend my testing approach and find potential vulnerabilities in the file upload feature that I hadn’t initially considered.
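Several of these edge cases can be turned into a quick, table-driven check. The validator below is a toy stand-in for a real upload endpoint - its rules, the extension whitelist, and the 5 MB limit are all assumptions for illustration:

```python
import os

MAX_SIZE = 5 * 1024 * 1024              # assumed 5 MB limit
ALLOWED_EXT = {".pdf", ".png", ".jpg"}  # assumed extension whitelist

def validate_upload(filename, size):
    """Toy validator standing in for the real upload endpoint."""
    if size > MAX_SIZE:
        return "rejected: file too large"
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXT:
        return "rejected: unsupported format"
    return "accepted"

# Table-driven edge cases drawn from the list above.
edge_cases = [
    ("report.pdf", MAX_SIZE),       # exactly at the limit
    ("report.pdf", MAX_SIZE + 1),   # just above the limit
    ("résumé (final)!.pdf", 1024),  # special and non-Latin characters
    ("photo.CR2", 2048),            # uncommon raw camera format
]
for name, size in edge_cases:
    print(f"{name} ({size} bytes) -> {validate_upload(name, size)}")
```

Keeping the cases in a table like this makes it easy to grow the list as new ideas come up during the session.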
Clear and Efficient Communication
One of the biggest challenges in exploratory testing is effectively documenting your findings in a way that is both detailed and easy for others to understand. With ChatGPT’s help, I can quickly draft bug reports and communicate issues to the team in a concise manner.
Example Scenario
After discovering a race condition in a multi-user system, I needed to write a clear bug report for the development team. The bug only occurred under specific conditions, so I had to ensure that my report was both detailed and easy to follow.
**Prompt Example**
_I found a race condition bug in a multi-user system, where the issue only
occurs when two or more users interact with the same feature simultaneously.
Can you help me draft a clear and detailed bug report? Include information
on how to reproduce the bug, the exact conditions under which it happens,
the expected behavior, and the actual result when the bug occurs._
**ChatGPT’s Answer:**
Here’s a structure for your bug report:
- **Bug Title**: Race Condition in Multi-User System During Concurrent
Feature Use
- **Steps to Reproduce**:
1. Open the feature in two separate user sessions.
2. Both users perform action X simultaneously (describe the action).
3. Observe how the system processes both actions at the same time.
- **Expected Behavior**: The system should process both user actions
independently without affecting each other.
- **Actual Behavior**: One of the users’ actions is either ignored or
delayed, resulting in inconsistent system behavior.
- **Reproducibility**: 80% of the time when two users perform the action
simultaneously.
- **Impact**: This issue can cause data corruption or inconsistent system
states in high-traffic environments.
This format allowed me to quickly structure a clear bug report, ensuring the development team understood the issue and could begin investigating.
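The lost-update variant of this kind of race can be reproduced deterministically in a few lines, which is handy when explaining the bug to developers. This is a simulation of the interleaving, not the actual system’s code:

```python
# Deterministic illustration of a lost-update race: two "users" read
# the same value before either one writes its result back.
balance = 100

read_by_a = balance       # user A reads the balance
read_by_b = balance       # user B reads before A has written back

balance = read_by_a - 30  # A withdraws 30
balance = read_by_b - 50  # B's write is based on a stale read

print(balance)            # 50 - A's withdrawal was silently lost
# A correctly serialized result would be 100 - 30 - 50 = 20.
```

In the real system the interleaving is timing-dependent, which is why the report above notes an ~80% reproduction rate rather than 100%.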
Documenting Exploratory Testing Process
Exploratory testing often means documenting a very fluid process - a bit like trying to capture raw, messy ideas on paper - which can be challenging. ChatGPT helps me organize my thoughts and findings into structured reports that can be used later for regression testing or learning.
Example Scenario
After completing exploratory testing for a new feature, I needed to document my approach, including which edge cases I tested, the testing environment, and the results.
**Prompt Example**
_I’ve completed an exploratory testing session for a new feature in our
application, and I need help organizing the documentation. I want to
include an overview of what I tested, the edge cases I considered, the
environment I used (e.g., different browsers, devices), and the results of
my testing, including any bugs I found._
**ChatGPT’s Answer:**
Here’s a suggested outline for your exploratory testing report:
- **Feature Tested**: [Feature name and description]
- **Test Environment**: List the browsers, devices, and operating systems
used during testing.
- **Test Approach**:
- Exploratory testing with a focus on [specific areas of the feature].
- Key edge cases tested (e.g., boundary value analysis, invalid inputs,
extreme conditions).
- **Results**:
- Number of bugs found: [List the bugs].
- Bug severity: Critical, Major, Minor
- Description of any performance issues or unexpected behaviors encountered.
- **Conclusion**: Summary of the feature’s overall stability based on testing.
This structure provided a clear and simple way to document my testing session and ensured that nothing was missed in the report.
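If you keep your session notes as structured data, the outline above can even be rendered automatically. The field names and helper below are my own invention, not a standard format:

```python
# Hypothetical helper that renders an exploratory-session summary
# into the markdown outline suggested above.
session = {
    "feature": "File upload",
    "environment": ["Chrome 126", "Firefox 127", "iOS Safari"],
    "edge_cases": ["boundary file sizes", "special characters in names"],
    "bugs": [("Upload hangs on network drop", "Major")],
}

def render_report(s):
    lines = [
        f"## Feature Tested: {s['feature']}",
        "### Test Environment",
        *[f"- {env}" for env in s["environment"]],
        "### Edge Cases",
        *[f"- {case}" for case in s["edge_cases"]],
        "### Results",
        *[f"- {title} (severity: {sev})" for title, sev in s["bugs"]],
    ]
    return "\n".join(lines)

print(render_report(session))
```

Generating the skeleton this way keeps reports consistent across sessions, while the actual observations still come from the tester.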
Streamlining Communication with the Team
Clear communication is essential when reporting bugs or clarifying requirements with clients. ChatGPT helps me draft emails or messages that explain complex issues clearly.
Example Scenario
While testing the login feature, I noticed some ambiguity about how many failed login attempts should be allowed before a user is locked out. I needed to clarify the login requirements with the client to ensure the feature worked as expected.
**Prompt Example**
_I’m testing the login feature and need to confirm the requirement for failed
login attempts before locking the user out. Can you help me draft an email
to the client to clarify this?_
**ChatGPT’s Answer:**
**Subject**: Clarification Needed on Login Attempt Limits
Hi [Client's Name],
I’m currently testing the login feature, and I’d like to confirm the
requirement for failed login attempts. How many incorrect attempts
should be allowed before the user is locked out, and how long should the
lockout period last?
This will help ensure the login behavior meets your expectations.
Thanks for your guidance!
Best regards,
[Your Name]
This draft helped me communicate clearly with the client, ensuring that the requirements were well understood before further testing.
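Once the client answers, the requirement usually boils down to two parameters: a maximum attempt count and a lockout duration. Here is a minimal sketch of such a lockout tracker, with placeholder values pending the client’s confirmation:

```python
class LoginThrottle:
    """Toy lockout tracker; the thresholds are placeholders for
    whatever the client confirms, not a real product's values."""

    def __init__(self, max_attempts=5, lockout_seconds=900):
        self.max_attempts = max_attempts
        self.lockout_seconds = lockout_seconds
        self._state = {}  # user -> (failed_count, locked_until)

    def record_failure(self, user, now):
        count, _ = self._state.get(user, (0, 0.0))
        count += 1
        locked_until = now + self.lockout_seconds if count >= self.max_attempts else 0.0
        self._state[user] = (count, locked_until)

    def is_locked(self, user, now):
        _, locked_until = self._state.get(user, (0, 0.0))
        return now < locked_until

# Three failures with a 3-attempt limit trigger a 60-second lockout.
throttle = LoginThrottle(max_attempts=3, lockout_seconds=60)
for _ in range(3):
    throttle.record_failure("alice", now=100.0)
print(throttle.is_locked("alice", now=120.0))  # True: inside the lockout window
print(throttle.is_locked("alice", now=200.0))  # False: lockout has expired
```

Having a sketch like this also makes the boundary cases to test explicit: the attempt exactly at the limit, and the first login right after the lockout expires.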
Conclusion
As you can see from these real-world scenarios, ChatGPT has become a valuable partner in my testing endeavors. It helps me quickly navigate unfamiliar topics, generate fresh test ideas, and efficiently handle documentation and communication, all while advancing my knowledge and problem-solving abilities.
What makes ChatGPT so useful is how seamlessly it integrates with my workflow. Rather than replacing the creativity and critical thinking that testing demands, it enhances my efficiency, allowing me to focus on finding bugs and improving the quality of the software. It’s like having an extra pair of hands - or a “rubber duck” - to help me think through complex testing scenarios, ensuring I can deliver high-quality results quickly and confidently.
In the future, AI tools like ChatGPT could become even smarter, helping testers by automatically generating test cases, predicting where problems might occur, or even simulating difficult scenarios like multiple users interacting at the same time. This would make testing faster and more thorough, allowing testers to catch even more issues before they reach users.
A Little Behind-the-Scenes
And now, for a little behind-the-scenes secret: ChatGPT not only helps me with my testing - it even helped me write this article! As I was gathering my thoughts and examples, ChatGPT was right there with me, providing prompt ideas that were simple, relatable, and applicable to any project. It even checked my spelling (because let’s face it, we all need a little help there sometimes). So, while I’m the one who typed out most of these words, ChatGPT played a starring role in getting this article polished, organized, and ready to share with the world.