Penetration Testing Basics Every FinTech Engineer Should Know
Pentesting isn't just the security team's job. After watching three different payment systems get findings that engineers could have caught early, I'm convinced every FinTech developer needs at least a working knowledge of how attackers think. Here's what I wish someone had told me earlier.
Why This Matters for Every Engineer
I used to think pentesting was something the security team handled after we shipped. Then a pentest firm found an IDOR vulnerability in a transfer endpoint I'd written. The fix took twenty minutes. But the finding delayed our launch by two weeks because of the remediation process, re-testing, and compliance sign-off.
That was the moment I realized: if I'd known what to look for, I would have caught it in code review. The security team can't review every pull request. They're outnumbered. But if every engineer on the team can spot the top ten patterns attackers exploit, you've multiplied your security coverage without hiring anyone.
The Pentest Lifecycle
Before diving into specifics, it helps to understand how a professional pentest actually works. Whether you're running tests internally or working with an external firm, the process follows the same five phases: scoping, reconnaissance, vulnerability discovery, exploitation, and reporting.
The scoping phase is where most teams mess up. If you don't clearly define which endpoints, environments, and user roles are in scope, you'll either miss critical areas or waste time testing things that don't matter. For payment systems, always include: the transaction API, authentication flows, webhook receivers, and any admin panels.
OWASP Top 10: The Payment System Edition
The OWASP Top 10 is the standard reference, but not all vulnerabilities hit payment systems equally. After being involved in pentests across multiple FinTech products, these are the ones that come up over and over.
IDOR is the one I see most often in payment systems. It's deceptively simple — you just change an ID in the URL or request body and see if you get someone else's data. But in a payment context, that means viewing other people's transaction history, account balances, or personal information. Every endpoint that takes a resource ID needs server-side ownership validation. No exceptions.
Tip: Create a simple test matrix: list every API endpoint, note which ones accept resource IDs, and verify each one checks that the authenticated user actually owns that resource. This alone catches the majority of IDOR bugs.
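To make the ownership check concrete, here's a minimal sketch in Python. The in-memory store, the `Forbidden` exception, and the function names are all illustrative, not from any particular framework; in a real service the lookup would hit your database inside the request handler.

```python
# Illustrative in-memory store; a real system queries the database.
TRANSACTIONS = {
    "txn_1": {"owner_id": "user_a", "amount_cents": 4200},
    "txn_2": {"owner_id": "user_b", "amount_cents": 99},
}

class Forbidden(Exception):
    """Raised when the authenticated user does not own the resource."""

def get_transaction(authenticated_user_id: str, transaction_id: str) -> dict:
    txn = TRANSACTIONS.get(transaction_id)
    if txn is None or txn["owner_id"] != authenticated_user_id:
        # Return the same error for "not found" and "not yours" so the
        # response can't be used to enumerate valid transaction IDs.
        raise Forbidden(f"transaction {transaction_id} not accessible")
    return txn
```

The key detail is that the check uses the authenticated user from the session or token, never an ID supplied in the request, and that "doesn't exist" and "not yours" are indistinguishable to the caller.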
Attack Vectors Specific to Payment APIs
Beyond the OWASP Top 10, payment systems have their own unique attack surface. These are the patterns I've seen exploited (or nearly exploited) in real engagements.
Parameter tampering on amounts
This one sounds too obvious to be real, but I've seen it. An attacker intercepts a payment request and changes the amount field from 99.99 to 0.01. If your server trusts the client-submitted amount instead of looking it up from the order, you've just sold something for a penny. Always derive the amount server-side from the order or cart state.
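A minimal sketch of the server-side pattern, with a hypothetical in-memory `ORDERS` store standing in for your database: the charge amount is recomputed from order state, and the client-submitted amount is used only as a tamper signal.

```python
# Hypothetical order store: each item is (sku, quantity, unit_price_cents).
ORDERS = {"order_7": {"items": [("widget", 2, 4999)]}}

def charge_amount_cents(order_id: str) -> int:
    """Derive the amount from server-side order state, never the client."""
    order = ORDERS[order_id]
    return sum(qty * unit for _sku, qty, unit in order["items"])

def create_payment(order_id: str, client_submitted_amount: int) -> dict:
    server_amount = charge_amount_cents(order_id)
    if client_submitted_amount != server_amount:
        # A mismatch means a buggy client or parameter tampering;
        # either way, reject and alert rather than trust the client.
        raise ValueError("amount mismatch: possible parameter tampering")
    return {"order_id": order_id, "amount_cents": server_amount}
```

Whether you reject on mismatch or silently charge the server-derived amount is a design choice; rejecting gives you a clean signal to alert on.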
Race conditions on balance checks
This is subtle and hard to catch in code review. The flow looks like: check balance → debit account → credit recipient. If those steps aren't atomic, an attacker can fire off multiple transfer requests simultaneously. Each one passes the balance check before any debit is applied. I've seen this result in negative balances and real financial loss.
# Vulnerable pattern — check-then-act without locking
balance = get_balance(user_id)
if balance >= amount:
    debit(user_id, amount)  # Race window here
    credit(recipient_id, amount)

-- Fixed — use database-level locking
BEGIN;
SELECT balance FROM accounts WHERE id = $1 FOR UPDATE;
-- Now the row is locked until COMMIT
UPDATE accounts SET balance = balance - $2 WHERE id = $1 AND balance >= $2;
COMMIT;
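You can reproduce the same check-then-act hazard in a few lines of Python. This is a toy demonstration, not production code: the dict stands in for the accounts table, and the lock plays the role of `SELECT ... FOR UPDATE`, making the check and the debit a single atomic step.

```python
import threading

balance = {"user": 100}
lock = threading.Lock()

def transfer_unsafe(amount: int) -> None:
    # Check and act are separate steps: two threads can both pass the
    # check before either debit lands, overdrawing the account.
    if balance["user"] >= amount:
        balance["user"] -= amount

def transfer_safe(amount: int) -> None:
    # The lock makes check+debit atomic, like a row lock in SQL.
    with lock:
        if balance["user"] >= amount:
            balance["user"] -= amount
```

With `transfer_safe`, ten concurrent transfers of the full balance let exactly one succeed; with `transfer_unsafe`, the outcome depends on thread timing, which is exactly the bug.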
Webhook spoofing
Payment providers send webhooks to confirm transactions. If you don't verify the webhook signature, an attacker can send fake "payment successful" notifications to your endpoint and get goods or services without paying. Always validate the signature using the provider's shared secret, and verify the event by calling back to the provider's API.
Warning: Never rely solely on webhook data to fulfill orders. Always verify the payment status with a server-to-server API call to the payment provider. Webhooks can be replayed, spoofed, or arrive out of order.
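Signature verification is usually an HMAC over the raw request body with a shared secret. Here's a generic sketch; the secret, header name, and exact scheme are placeholders, since each provider (Stripe, Adyen, etc.) defines its own format, often including a timestamp to block replays, so follow their docs exactly.

```python
import hashlib
import hmac

# Placeholder secret; in production, load this from a secrets manager.
WEBHOOK_SECRET = b"example-shared-secret"

def sign(payload: bytes) -> str:
    """Compute the expected HMAC-SHA256 signature for a webhook body."""
    return hmac.new(WEBHOOK_SECRET, payload, hashlib.sha256).hexdigest()

def verify_webhook(payload: bytes, signature_header: str) -> bool:
    # compare_digest is constant-time, preventing timing attacks
    # that leak the correct signature byte by byte.
    return hmac.compare_digest(sign(payload), signature_header)
```

Note that the HMAC must be computed over the raw bytes of the request body, before any JSON parsing or re-serialization, or the signatures won't match.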
Tools Every FinTech Engineer Should Know
You don't need to become a full-time pentester. But knowing your way around a few key tools will change how you think about the code you write.
Burp Suite
This is the industry standard for web application testing. The Community Edition is free and covers most of what you need. Set it up as a proxy, browse your application, and Burp captures every request. You can then modify parameters, replay requests with different auth tokens, and test for IDORs in minutes. The Repeater tab alone is worth learning — it lets you tweak and resend individual requests without touching the browser.
OWASP ZAP
ZAP is the open-source alternative to Burp, and it's excellent for automated scanning. I use it in CI pipelines (more on that below) because it has a solid CLI and Docker image. It won't catch everything a manual tester would, but it reliably flags missing security headers, reflected XSS, and basic injection points.
sqlmap
If you suspect a SQL injection vulnerability, sqlmap will confirm it and show you exactly how bad it is. Point it at a URL with a parameter, and it'll cycle through injection techniques automatically. Use it only against your own systems — this tool is powerful enough to dump entire databases.
# Basic sqlmap usage against a test endpoint
sqlmap -u "https://staging.example.com/api/search?q=test" \
    --headers="Authorization: Bearer YOUR_TOKEN" \
    --level=3 --risk=2 --batch

# ZAP baseline scan in CI
docker run -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py \
    -t https://staging.example.com \
    -r report.html
Tip: Set up Burp Suite with your staging environment and spend 30 minutes just browsing your app through the proxy. Look at the requests in the HTTP History tab. You'll be surprised how much you learn about your own API surface just by watching the traffic.
Security Testing in Your CI/CD Pipeline
Manual pentesting is essential, but it happens quarterly at best. You need automated checks running on every pull request. Here's a practical setup that won't slow your pipeline to a crawl.
# .github/workflows/security.yml
name: Security Checks
on: [pull_request]
jobs:
  dependency-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run dependency audit
        run: npm audit --audit-level=high
  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Semgrep
        uses: semgrep/semgrep-action@v1
        with:
          config: p/owasp-top-ten
  dast:
    runs-on: ubuntu-latest
    needs: [deploy-staging]
    steps:
      - name: ZAP Baseline Scan
        uses: zaproxy/action-baseline@v0.12.0
        with:
          target: 'https://staging.example.com'
          rules_file_name: 'zap-rules.tsv'
The key insight: run SAST (static analysis) on every PR — it's fast and catches hardcoded secrets, injection patterns, and insecure configurations. Run DAST (dynamic analysis with ZAP) against staging after deployment — it's slower but catches runtime issues like missing headers and reflected XSS. Don't try to run DAST on every PR; it'll kill your velocity.
Warning: Never run automated security scanners against production. Use a staging environment that mirrors production. Scanners can generate thousands of requests, trigger rate limits, and in some cases corrupt data. Always test against an isolated environment.
Working with External Pentest Firms
PCI DSS requires penetration testing at least annually and after significant changes, performed by a qualified internal resource or an external third party that is organizationally independent. Even if you're doing internal testing, you'll eventually work with an external firm. Here's what to expect and how to prepare.
First, they'll ask for a scoping document. Have this ready: a list of in-scope URLs, IP ranges, user roles (with test credentials), and any areas that are off-limits (like third-party integrations you don't own). The more prepared you are, the less time they spend on recon and the more time they spend finding real issues.
Second, expect the engagement to take two to four weeks for a typical payment application. You'll get a draft report, have a chance to dispute false positives, and then receive the final report. Critical findings usually come with a 30-day remediation window.
Third — and this is the part nobody tells you — prepare your team emotionally. A pentest report full of findings feels like a personal attack on your code. It's not. Every application has vulnerabilities. The point is to find them before someone malicious does. I've learned to read pentest reports as a gift: someone just told you exactly where your weaknesses are.
Real Vulnerabilities I've Seen
These are anonymized, but they're all real findings from payment system pentests I've been involved in.
- The forgotten admin endpoint. An /admin/users endpoint was built during development for debugging. It had no authentication. It returned full user records including hashed passwords and KYC documents. It was still live in production six months after launch. The fix was one line of middleware — but the finding was rated Critical.
- The predictable transaction ID. Transaction IDs were sequential integers. An attacker could enumerate every transaction in the system by incrementing the ID. Combined with a missing authorization check, they could view any user's payment history. We switched to UUIDs and added ownership validation.
- The race condition on promotional credits. A "refer a friend" feature credited both users when a referral signed up. By sending the signup request multiple times in rapid succession (within milliseconds), an attacker could trigger the credit multiple times before the deduplication check ran. The fix required a database-level unique constraint and row locking.
- The webhook without signature verification. A payment confirmation webhook accepted any POST request with the right JSON structure. No signature check, no IP allowlisting. An attacker could mark any order as paid by sending a crafted request. This one still keeps me up at night.
Every single one of these could have been caught by an engineer who knew what to look for. That's the whole point of this article.
References
- OWASP Top 10 — Web Application Security Risks
- OWASP Web Security Testing Guide (WSTG)
- OWASP Transaction Authorization Cheat Sheet
- Burp Suite Documentation — PortSwigger
- OWASP ZAP — Official Documentation
- sqlmap — Automatic SQL Injection Tool
- Semgrep — Static Analysis Documentation
- PCI Security Standards Council — Document Library
Disclaimer: This article is for educational purposes and covers defensive security and authorized testing only. Always obtain written authorization before testing any system you do not own. The author's views are personal and do not represent any employer. Product names and brands are property of their respective owners. Security recommendations should be validated against your specific compliance requirements and threat model.