What do you do if you cannot resolve a bug? [closed]

Tags: debugging



Some things that help:

1) Take a break and approach the bug from a different angle.

2) Get more aggressive with tracing and logging (see the sketch after this list).

3) Have another pair of eyes look at it.

4) A usual last resort is to make the bug irrelevant by changing the fundamental conditions in which it occurs.

5) Smash and break things. (Stress relief only!)
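
For item 2, aggressive tracing can be as simple as wrapping suspect functions so every call, argument, and result gets logged. Here is a minimal Python sketch of the idea (sync_file is a hypothetical function under suspicion):

    import functools
    import logging

    logging.basicConfig(level=logging.DEBUG,
                        format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger(__name__)

    def traced(fn):
        """Log every call, its arguments, and its result or exception."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            log.debug("enter %s args=%r kwargs=%r", fn.__name__, args, kwargs)
            try:
                result = fn(*args, **kwargs)
            except Exception:
                log.exception("%s raised", fn.__name__)
                raise
            log.debug("exit %s -> %r", fn.__name__, result)
            return result
        return wrapper

    @traced
    def sync_file(path):  # hypothetical function under suspicion
        return len(path)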


I once worked for a company that sold a client-server application that was basically a file transfer and synchronization tool. Both the client and the server were custom applications we had designed.

We had a persistent bug that was very hard to duplicate in the lab. Our server could only handle a certain number of incoming client connections per box, so many of our customers would "cluster" multiple servers together to handle large user populations. The back-end data for the cluster lived on a file server they all shared. In this cluster configuration there was a bug that would occur under load: we would get a low-level file system error code on a file-sharing call involving one of the back-end files. Nobody could reproduce it reliably in the lab, and even when they could, they couldn't narrow down what was happening.

(I forget the exact error; it was probably 59 ERROR_UNEXP_NET_ERR or maybe 65 ERROR_NETWORK_ACCESS_DENIED. As I recall, it was not even one of the documented error codes you were supposed to be able to get from the API we were calling, which was usually a lock or unlock call on a file section.)

Since it involved the communication between the server and the back-end file store, and I was the "network transport" guy, I was tasked with looking at it. Many others had looked at it with no luck.

The one solid thing I had was that I knew where in the code the error was being detected, but not what to do about it, so I needed to find the root cause. I set up an appropriate hardware environment to duplicate it and deployed a custom build of the server software that instrumented the section of code in question.

The instrumentation was as follows: I added a test for the troublesome error code and had it call a piece of code that sent a UDP packet to a predetermined network address when the error occurred. The UDP packet contained a unique string to key on.
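
The original instrumentation was C on Windows NT, but the idea translates directly. A sketch in Python (the address, port, marker string, and error codes here are illustrative, not the originals):

    import socket

    SNIFFER_ADDR = ("192.0.2.50", 9999)  # hypothetical address the sniffer watches
    MARKER = b"BUG-TRIGGER-7f3a"         # unique string for the sniffer to key on

    def beacon_if_bug(status, suspect_codes=(59, 65)):
        """On the troublesome error code, fire one UDP packet at a
        predetermined address so a listening sniffer can stop capturing."""
        if status in suspect_codes:
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.sendto(MARKER, SNIFFER_ADDR)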

I then set up a packet-sniffing tool on the network (at the time I was using Microsoft Network Monitor), positioned where it could "see" this UDP packet when it was sent as well as all the communication between the cluster servers and the file server.

Most good sniffers have a mode where they capture until they see a particular piece of traffic, then stop. I turned on that mode and set it to look for the UDP packet my code would send. The goal was to end up with a packet capture of all the file server traffic right before the bug occurred. The very last network packets to and from the system where the UDP packet originated would presumably be a big clue as to what was happening.
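
Microsoft Network Monitor is long gone, but the same "capture until trigger" setup can be sketched today with scapy (the interface name and marker string are assumptions):

    from scapy.all import Raw, UDP, sniff, wrpcap

    MARKER = b"BUG-TRIGGER-7f3a"  # the unique string the instrumented server sends

    def is_beacon(pkt):
        """Return True when the beacon packet is seen, which stops the capture."""
        return pkt.haslayer(UDP) and pkt.haslayer(Raw) and MARKER in bytes(pkt[Raw])

    # Capture all traffic on the segment until the beacon arrives, then save it.
    # (A capture left running all weekend would want a ring buffer to bound memory.)
    packets = sniff(iface="eth0", stop_filter=is_beacon)
    wrpcap("before_the_bug.pcap", packets)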

I set the "stress test" configuration going and went home for the weekend.

When I got back on Monday, lo and behold, I had my data. The sniffer had stopped just as expected after many hours of running and contained a capture. Studying the capture, I found that the Server Message Block, or SMB (aka CIFS, aka Samba), connection between our server and the file server was actually timing out at the TCP level due to extreme loading on the server. Because all of Microsoft's stuff is heavily layered, the failure would percolate back up through the file-sharing stack as an "unexpected" error instead of a more intelligible error code that said "hey, you lost your connection at the TCP level".

I did a little more research on the TCP settings for Windows, and lo and behold, the defaults for the version of Windows we were using (probably NT 4 in that era) were none too generous. They allowed only a very small number of failures on the TCP connection, and boom, you were dead. Once you lost your SMB connection to the file server, all your file locks were toast and there was no way to recover.
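
The story doesn't say exactly which settings the appendix changed; one plausible NT-era candidate is the TcpMaxDataRetransmissions registry value, which governs how many times TCP retries an unacknowledged segment before dropping the connection. A hedged Python sketch of raising it (the value 10 is illustrative):

    import winreg

    # Hypothetical example: raise TcpMaxDataRetransmissions (NT-era default: 5)
    # so a briefly overloaded connection survives more retries before dropping.
    with winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters",
        0,
        winreg.KEY_SET_VALUE,
    ) as key:
        winreg.SetValueEx(key, "TcpMaxDataRetransmissions", 0, winreg.REG_DWORD, 10)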

So I ended up writing an appendix to the user manual that explained how to alter the TCP settings in Windows to make your cluster server a bit more tolerant of high-load situations. And that was it. The fix to the bug was zero change in code, merely some additional documentation on how to properly configure the OS for use by this product.

What have we learned?

  • Be prepared to run altered versions of your code to investigate the problem
  • Consider using non-traditional tools to solve the problem (sniffers)
  • Not all bug fixes require code changes
  • Sometimes you can diagnose a bug while at home having a beer

I do a number of different things:

  • throw out all my assumptions and start from scratch. Remember, a bug exists because something that appears correct is actually wrong. Even the lines, functions, or classes you are absolutely certain about may be incorrect. Until you can convince yourself of their correctness, you can't assume anything is right.

  • keep putting in print statements and assert statements to eliminate possibilities and let me form new assumptions.

  • step through the code in the debugger if the problem is one of control flow. Don't step over functions; step into them and go through all the detail of their execution to confirm they are working correctly. Confirm the arguments and return values.

  • if a line, function, or class is suspect but I can't prove it in situ, write a small test case that does what I think the problem construct does. This may locate the problem or give some insight into where to look next (a small example follows this list).

  • stop for the day. It's amazing what kind of offline processing your brain will do overnight. Often the answer or a key insight will appear the next day while I'm doing something mindless like showering or driving.
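
A tiny example of that loop, combining the assert/print and stand-alone-test-case bullets (parse_record and its data are invented for illustration):

    def parse_record(line):
        fields = line.split(",")
        assert len(fields) == 3, f"expected 3 fields, got {len(fields)}: {line!r}"
        print(f"parse_record: fields={fields}")  # temporary tracing
        return fields

    # A small stand-alone test case for the suspect construct:
    assert parse_record("a,b,c") == ["a", "b", "c"]
    assert parse_record("a,b,c\n") == ["a", "b", "c\n"]  # aha: the trailing newline survives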


Create an automated way to cause the bug. The worst bug to fix is one that takes hours to reproduce.
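
A sketch of such a harness in Python, assuming the failure is intermittent (flaky_operation stands in for whatever triggers the bug):

    import traceback

    def reproduce(op, max_attempts=10_000):
        """Hammer the suspect operation until it fails, then report the
        attempt count and the full traceback for the debugging session."""
        for attempt in range(1, max_attempts + 1):
            try:
                op()
            except Exception:
                print(f"reproduced on attempt {attempt}")
                traceback.print_exc()
                return attempt
        print(f"no failure in {max_attempts} attempts")
        return None

    # Usage: reproduce(flaky_operation)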


Quote taken from "Cryptonomicon":

"Intuition, like a flash of lightning, lasts only for a second. It generally comes when one is tormented by a difficult decipherment and when one reviews in his mind the fruitless experiments already tried. Suddenly the light breaks through and one finds after a few minutes what previous days of labor were unable to reveal."


I usually ask someone else to take a look at the code. While I'm explaining what the code is supposed to do, I sometimes see the bug just as I talk.

When a bug is a tough one, I sit and work until I figure it out and solve the problem. Interestingly enough, there are times when catching a mysterious bug is more enjoyable than everything running smoothly. And the relief and feeling when a bug is resolved, well, not many other things can beat that (except the obvious ones).


If all else fails, don't tackle it directly. Rewrite the problem-area code in a cleaner, more refactored form.