

Test Coverage

I’ve been looking a lot recently at JUnit (and TestNG) tests on a code base I’m not too familiar with. In many cases I was not convinced that the tests were adequate, but it took a fair bit of investigation before I could be satisfied that this was the case. I would need to look at the tests, then look at the code they’re meant to exercise, then try to work out in my head whether the tests cover everything they should. To make this process a bit easier, I’ve started running code coverage analysis using Emma. While this doesn’t tell me whether a test is good or not, it does show me at a glance how much code is covered by the tests and exactly which lines, methods and classes are missed. This is usually a good first approximation of the quality of the test cases.

I’ve found Emma to be a useful tool to run after I think I’ve written my test cases and got them working. Running the test case tells me if the code being tested works. Running Emma tells me if I’ve tested enough of the code. There’s no point in having 100% test case successes if the tests themselves only exercise 50% of the code.
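To make that last point concrete, here’s a minimal sketch (the Discount class and its test are invented for illustration, not taken from the code base I was looking at). The test passes, so JUnit reports complete success, but it only exercises one of the two branches – exactly the kind of gap a coverage report highlights:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical class under test: one method, two branches.
    class Discount {
        double apply(double price, boolean isMember) {
            if (isMember) {
                return price * 0.9; // member discount branch
            }
            return price;           // full-price branch
        }
    }

    // This single test passes, so the test run looks perfect, yet the
    // full-price branch is never executed. A coverage tool such as Emma
    // flags that line as missed.
    public class DiscountTest {
        @Test
        public void memberGetsTenPercentOff() {
            assertEquals(90.0, new Discount().apply(100.0, true), 0.0001);
        }
    }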

Memory usage

A year or two back I was working on a web application which was expected to have moderate use – around 50 concurrent users. The product was generally getting thumbs up from our QA guys. It did everything we expected it to do. Then we had a go at testing under load.

Bang!

We found that if we had just a few users hammering the system for any length of time, the memory usage became unacceptable. Simple maths showed that the problem was down to the number of open sessions: each session required 20–30MB of memory from the app server. That’s a piddlingly small amount when we have a handful of test users, and it went completely unnoticed against the background noise of a typical server’s memory use. However, once just a hundred sessions had been opened (not necessarily at the same time) we were chewing through gigabytes – a hundred sessions at 25MB apiece is around 2.5GB.
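For what it’s worth, the sketch below shows the kind of pattern that typically produces this sort of per-session bloat – the servlet, the attribute name and the 25MB cached report are all hypothetical, since the original application’s session contents aren’t described here. The point is simply that anything sizeable stashed in HttpSession gets multiplied by every session the container keeps alive:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Hypothetical servlet illustrating per-session memory growth.
    public class ReportServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            HttpSession session = req.getSession(true);
            byte[] cachedReport = (byte[]) session.getAttribute("cachedReport");
            if (cachedReport == null) {
                // ~25MB cached per session "for convenience": invisible with a
                // handful of testers, gigabytes once a hundred sessions exist.
                cachedReport = new byte[25 * 1024 * 1024];
                session.setAttribute("cachedReport", cachedReport);
            }
            resp.getWriter().println("Cached report bytes: " + cachedReport.length);
        }
    }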