
We would like to test a lot more, but I really don't know how to test some of the critical stuff.

Just as an example, how do you test a parser that processes large amounts of sometimes sloppy, semi-structured text? Whether a particular defect should be classified as a bug in my parser or as a rare glitch in the source data is undecidable until I know how often the defect occurs.

What I need is a kind of heuristic test framework that makes sure the parser doesn't silently miss any large chunks; otherwise I only find out about it weeks later, if at all. I cannot supply individual test cases for everything that could possibly be found in the source data.
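One way to approximate such a heuristic framework: instead of asserting exact outputs, assert that the parser accounts for (nearly) all of the input, so large skipped chunks surface immediately rather than weeks later. A minimal sketch, assuming a hypothetical parser whose records carry the (start, end) span of input each one consumed:

```python
# Heuristic "coverage" check: rather than asserting exact parse results,
# verify the parser consumed nearly all of the input and left no big gaps.
# The record format here (dicts with "start"/"end" offsets) is an assumption,
# not any particular parser's real API.

def unparsed_spans(text, records, min_gap=200):
    """Return spans of `text` longer than `min_gap` that no record covers."""
    covered = sorted((r["start"], r["end"]) for r in records)
    gaps, pos = [], 0
    for start, end in covered:
        if start - pos >= min_gap:
            gaps.append((pos, start))
        pos = max(pos, end)
    if len(text) - pos >= min_gap:
        gaps.append((pos, len(text)))
    return gaps

def check_coverage(text, records, min_ratio=0.95, min_gap=200):
    """Fail if overall coverage drops below `min_ratio` or a big gap appears."""
    covered_chars = sum(r["end"] - r["start"] for r in records)
    ratio = covered_chars / max(len(text), 1)
    gaps = unparsed_spans(text, records, min_gap)
    assert ratio >= min_ratio, f"only {ratio:.1%} of input parsed"
    assert not gaps, f"large unparsed chunks at {gaps}"
```

Run against a sample of real source data on every build, this won't tell you whether each record is correct, but it does catch the "parser quietly dropped a megabyte" class of defect without enumerating test cases.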



I cannot supply individual test cases for everything that could possibly be found in the source data.

Perhaps not, but you can supply test cases for known problems you might encounter, as well as for ones you've already encountered and solved.


Yes, that's what I'm doing, but I feel it's a drop in the bucket.


Also, don't forget that the tests you add double as regression tests. A large test suite assures you that a new fix won't reintroduce any of the bugs you had fixed earlier.


It is, but as bugs crop up you can add tests to ensure they don't crop up again. While it's not possible to guarantee perfection, it does help ensure you don't revert to past problems.
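One cheap way to build that habit: every input that once broke the parser goes into a fixture directory next to the output you decided was correct after the fix, and the whole set is replayed on every change. A minimal sketch, with a placeholder `parse` standing in for the real parser and a made-up `<case>.in` / `<case>.json` fixture layout:

```python
# Replay every previously-seen bad input on each change so old bugs
# can't silently return. Each fixture pairs an input that once broke
# the parser with the expected output recorded after the fix.
import json
from pathlib import Path

def parse(text):
    # Placeholder: split whitespace-separated "k=v" pairs.
    # The real parser goes here.
    return dict(pair.split("=", 1) for pair in text.split())

def replay_regressions(fixture_dir):
    """Run parse() over every <case>.in / <case>.json pair; list failures."""
    failures = []
    for case in sorted(Path(fixture_dir).glob("*.in")):
        expected = json.loads(case.with_suffix(".json").read_text())
        if parse(case.read_text()) != expected:
            failures.append(case.name)
    return failures
```

Adding a regression test then costs nothing beyond dropping two files into the directory, which keeps the suite growing with every bug you fix.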



