You're Testing the Wrong Things. And you're doing it on purpose!

Let me break it down for you.

“Make sure it works.” - That’s what most teams hear when someone mentions testing. So they test the case where everything goes right.

Then production happens.

And suddenly the “edge cases” aren’t edge cases anymore - they’re Friday-evening incidents.

The Happy Path Trap

Your tests pass. Green checkmarks everywhere. You ship with confidence.

Here’s what you neglected to test:

  • The API that times out after 30 seconds
  • The null value that shouldn’t exist but does
  • The concurrent requests that race for the same resource
  • The user who clicks “submit” three times because the button didn’t respond
  • The request from someone who shouldn’t have access but found a way

Result: You tested the wrong thing, not the actual system.
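To make that concrete, here’s a minimal sketch. The function `parse_quantity` and the `raises` helper are hypothetical names for illustration - the point is the ratio: one happy-path assertion, three assertions for the input that actually shows up in production.

```python
# Hypothetical input parser: the kind of function teams test only with valid input.
def parse_quantity(raw) -> int:
    """Parse a user-supplied quantity, rejecting anything that isn't a positive integer."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {raw!r}")
    if value <= 0:
        raise ValueError(f"must be positive: {value}")
    return value

def raises(exc, fn, *args) -> bool:
    """Tiny stand-in for pytest.raises: True if fn(*args) raises exc."""
    try:
        fn(*args)
    except exc:
        return True
    return False

# The happy-path test everyone writes:
assert parse_quantity("3") == 3

# The tests that catch production bugs - empty, null, and boundary input:
assert raises(ValueError, parse_quantity, "")    # empty string from a blank form field
assert raises(ValueError, parse_quantity, None)  # the null that "shouldn't exist" but does
assert raises(ValueError, parse_quantity, "0")   # boundary: zero is not a valid quantity
```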

What You Should Actually Test

Error Conditions and Exceptions

The happy path assumes everything works. Reality doesn’t.

Your tests should answer:

  • What happens when the database is down?
  • What happens when the device’s disk is full?
  • What happens when the payment gateway times out?
  • What happens when the third-party API returns a 500?

Your production environment will throw these at you. Your tests should have thrown them first.

Result: If you only test success, you ship code that can’t fail gracefully.
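One way to throw those failures first is a test double that misbehaves on demand. This is a sketch under assumptions - `FlakyGateway` and `charge_with_fallback` are illustrative names, not a real payment SDK - showing a failure path being tested, not just the success path.

```python
class GatewayTimeout(Exception):
    """Simulates the 30-second timeout from a payment gateway."""

class FlakyGateway:
    """Test double: a gateway that never responds in time."""
    def charge(self, amount):
        raise GatewayTimeout("gateway did not respond within 30s")

def charge_with_fallback(gateway, amount):
    """Return ('ok', receipt) on success, ('queued', amount) when the gateway is down."""
    try:
        return ("ok", gateway.charge(amount))
    except GatewayTimeout:
        # Fail gracefully: queue the charge for retry instead of crashing the checkout.
        return ("queued", amount)

# The test exercises the timeout, not the happy path:
status, payload = charge_with_fallback(FlakyGateway(), 42)
assert status == "queued"
assert payload == 42
```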

Boundary Conditions

Bugs live at the edge.

Test the edge cases:

  • Empty arrays and maximum-sized arrays
  • Zero, negative numbers, and values at integer overflow limits
  • Empty strings, maximum-length strings, Unicode characters
  • First day of month, last day of month, February 29th
  • Timezone boundaries, daylight saving transitions

Result: Boundary bugs don’t show up in demos. They show up in production when a user enters something “unusual.”
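Boundary tests are cheap to write once you name the edges. A minimal sketch, assuming a hypothetical `last_day_of_month` helper - note that the interesting assertions are exactly the dates a demo never touches:

```python
import calendar
from datetime import date

def last_day_of_month(d: date) -> date:
    """Return the last calendar day of the month containing d."""
    return d.replace(day=calendar.monthrange(d.year, d.month)[1])

# The edges, not the middle:
assert last_day_of_month(date(2024, 2, 1)) == date(2024, 2, 29)    # leap-year February
assert last_day_of_month(date(2023, 2, 1)) == date(2023, 2, 28)    # non-leap February
assert last_day_of_month(date(2024, 12, 31)) == date(2024, 12, 31) # already at the edge
```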

Invalid Inputs

Users don’t read your documentation. They don’t know your validation rules.

They will:

  • Click buttons in the “wrong order”
  • Submit forms with missing required fields
  • Refresh the page in the middle of the workflow
  • Restart the device during configuration because it got “frozen”

You can’t prevent any of that from happening, but your system should handle all of it without breaking.

Result: Invalid input is the everyday threat - not an attacker, just a user trying to get something done. If your system can’t handle it, it’s not ready.
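The defensive pattern is simple: validate, collect errors, never crash. A sketch with illustrative field names (`email`, `name` are assumptions, not from any real schema):

```python
REQUIRED = {"email", "name"}  # hypothetical required fields

def validate(form: dict) -> list:
    """Return a list of human-readable errors instead of raising on bad input."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED - form.keys())]
    if "email" in form and "@" not in str(form["email"]):
        errors.append("email looks invalid")
    return errors

# Users will submit anything - handle it, don't crash on it:
assert validate({}) == ["missing field: email", "missing field: name"]
assert validate({"email": "not-an-email", "name": "Ada"}) == ["email looks invalid"]
assert validate({"email": "a@b.com", "name": "Ada"}) == []
```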

Race Conditions

This is where demos fail silently and production fails loudly.

Test these:

  • Two users updating the same record or device simultaneously
  • A payment being processed while it is cancelled
  • A resource being created and deleted at the same time
  • A critical request arriving while the system is still processing the previous one

The happy path assumes sequential operations. Reality doesn’t wait for its turn.

Result: Race conditions don’t appear in demos because demos are run by a single, careful user. Production isn’t.
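You can reproduce the classic lost-update race on a laptop. This is a minimal sketch - `Counter` and `hammer` are illustrative names - where the unsynchronized read-modify-write can lose updates under contention, while the locked version is deterministic and testable:

```python
import threading

class Counter:
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def increment_unsafe(self):
        v = self.value   # read
        v += 1           # modify
        self.value = v   # write - another thread may have written in between

    def increment_safe(self):
        with self.lock:  # the whole read-modify-write is now atomic
            self.value += 1

def hammer(counter, method, n_threads=8, n_iters=10_000):
    """Run `method` n_threads * n_iters times concurrently, return the final count."""
    threads = [threading.Thread(target=lambda: [method() for _ in range(n_iters)])
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.value

safe = Counter()
assert hammer(safe, safe.increment_safe) == 80_000  # locked version is always exact
```

The unsafe variant may or may not lose updates on any given run - which is exactly why race conditions slip through demos: a flaky test is still a failing test.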

Security Scenarios (Auth, Permissions)

Your app has roles. Admin. User. Guest. Service account. Each has permissions.

Your tests should answer:

  • Can a guest access admin endpoints?
  • Can user A modify user B’s data?
  • Can an expired token still access protected resources?
  • Can someone bypass your client-side validation and post directly to your API?

Result: Security bugs don’t show up in demos because nobody in the room is trying to break in. Attackers will.
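Those questions translate directly into deny-by-default tests. A sketch under assumptions - the roles, actions, and `authorize` function are illustrative, not a real framework API:

```python
# Hypothetical role-to-permission mapping:
PERMISSIONS = {
    "admin": {"read", "write", "delete", "admin_panel"},
    "user":  {"read", "write"},
    "guest": {"read"},
}

def authorize(role: str, action: str, token_expired: bool = False) -> bool:
    """Deny by default: expired tokens, unknown roles, and missing permissions all fail."""
    if token_expired:
        return False
    return action in PERMISSIONS.get(role, set())

# The questions your tests should answer - denials first:
assert not authorize("guest", "admin_panel")               # guest hitting admin endpoints
assert not authorize("user", "delete")                     # permission escalation
assert not authorize("admin", "read", token_expired=True)  # expired token
assert not authorize("intern", "read")                     # unknown role
assert authorize("admin", "admin_panel")                   # and the happy path, last
```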

The Bottom Line

Testing isn’t about proving it works.

Testing is about proving where and how it breaks — before your users do.

If your tests only cover the happy path, you’re not testing. You’re ticking the boxes.

A Real Testing Checklist

Next time someone says “test it,” run through this:

  • Happy path: every valid workflow end-to-end
  • Errors: database down, API timeout, payment failure
  • Boundaries: empty, zero, max, negative, overflow
  • Invalid input: wrong types, missing fields, injection attempts
  • Race conditions: concurrent updates, simultaneous creates/deletes
  • Security: role bypass, expired tokens, permission escalation

What I Do

I help teams find the gaps in their testing strategy before production finds them.

If your tests pass but your system still breaks, we should talk. There’s probably a landmine you haven’t stepped on yet.

👉 Drop me a message.

Join the Industrial IoT Briefing for strategic insights on architecture, hardware scaling, and operational resilience. (By subscribing you accept the privacy policy.)