
Category: Testing

Web service testing with soapUI

In my previous post regarding Spring-WS and Security I didn’t mention anything about testing the resulting SOAP service. Particularly when it comes to secure services, it’s vitally important to test. First, we want to make sure that the service is functionally correct – that it returns the correct results. Second, we want to make sure it is secure – that it refuses service to any request that does not meet our security requirements.

With regard to how we test, it’s simplest to use some SOAP editor tool that lets us fiddle with the request and press a button to retest instantly. But ideally we want some programmatic test that can be included in the test phase of our build.

This post describes testing the now legendary Spanners WS demo with the following requirements:

  1. Tests must be functional – they test what the web service actually does
  2. Security is tested – requests that don’t meet our security requirements are refused
  3. Tests can be tweaked and rerun instantly
  4. Tests can be included in the build process

The updated source of the Spanners WS demo including the tests described here is available to download.
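
For requirement 4, soapUI test suites can be driven from JUnit so they run as part of the normal build. A rough sketch of that, assuming the soapUI libraries are on the test classpath – the project file name here is a placeholder for whatever you saved from the soapUI editor:

import com.eviware.soapui.tools.SoapUITestCaseRunner;
import org.junit.Test;

public class SpannersWsSoapUITest {

    // Placeholder path - substitute the project file exported from the soapUI editor
    private static final String PROJECT_FILE = "src/test/resources/spanners-soapui-project.xml";

    @Test
    public void runSoapUITestSuites() throws Exception {
        SoapUITestCaseRunner runner = new SoapUITestCaseRunner();
        runner.setProjectFile(PROJECT_FILE);
        // Runs every test suite and test case in the project; any failed
        // assertion (functional or security) fails this JUnit test
        runner.run();
    }
}

The same project file can still be opened in the soapUI editor for the fiddle-and-retest style of testing, so requirements 3 and 4 are covered by one set of tests.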

Rounded corners in CSS / IE Tester

Until Internet Explorer 8 is finally retired we still have to dick about with CSS to make IE behave properly. I’m not a CSS hacker but this is one trick that I suspect I’ll need again at least until IE9 becomes standard.

CSS3 includes a property for rounded corners (border-radius) which was (sort of) adopted in Firefox, Chrome and Safari some time ago. I don’t use it on this site – someone else did the hard work there using images for the corners, presumably because CSS3 support was so poor at the time. This new CSS3 property can, however, be retrofitted to old browsers with a little work.

Test Coverage

I’ve been looking a lot recently at JUnit (and TestNG) tests on a code base I’m not too familiar with. In many cases I was not convinced that the tests were adequate, but it took a fair bit of investigation before I could be satisfied that this was the case. I would need to look at a test, then look at the code it’s meant to exercise, then try to work out in my head whether the test covers everything it should. To make this process easier, I’ve started running code coverage analysis using Emma. While this doesn’t tell me whether a test is good or not, it does show me at a glance how much code the test covers and exactly which lines, methods and classes are missed. This is usually a good first approximation of the quality of the test case.

I’ve found Emma to be a useful tool to run after I think I’ve written my test cases and got them working. Running the test case tells me if the code being tested works. Running Emma tells me if I’ve tested enough of the code. There’s no point in having 100% test case successes if the tests themselves only exercise 50% of the code.
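
A contrived example, just to illustrate the point – the class and test below are made up for this post. The test passes every time, but a coverage report immediately shows that the size > 10 branch is never executed:

public class SpannerChecker {
    public String check(int size) {
        if (size > 10) {
            return "large";
        }
        return "small";
    }
}

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SpannerCheckerTest {

    // Passes, but only ever exercises the "small" branch of check()
    @Test
    public void smallSpannerIsReportedAsSmall() {
        assertEquals("small", new SpannerChecker().check(5));
    }
}

The test results alone say everything is green; the coverage report says half the method was never run.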

DbUnit

I’ve decided to revisit the JUnit testing Hibernate and Spring recipe that I posted a while back. A problem with the previous recipe is that it did not provide any means of initialising the test database. This wasn’t too much of a problem as I was mostly testing the data insert operations of the DAOs: I then used the same DAO to retrieve the newly inserted data and tested what came back. However, this is no good if I don’t want insert operations on my DAO (if it only retrieves read-only data from the database) or if I want to test the retrieval operations independently of the insert operations.

This post extends the recipe to include a means of initialising the database using DbUnit.
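
The heart of it is only a few lines. A rough sketch, assuming a flat XML dataset file and a DataSource picked up from the Spring test context – the class and file names here are made up:

import java.io.File;

import javax.sql.DataSource;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class TestDataLoader {

    // Loads the flat XML dataset and does a clean insert into the test database
    public void seed(DataSource dataSource, File datasetFile) throws Exception {
        IDatabaseConnection connection =
                new DatabaseConnection(dataSource.getConnection());
        try {
            IDataSet dataSet = new FlatXmlDataSetBuilder().build(datasetFile);
            // CLEAN_INSERT deletes existing rows from the tables in the dataset,
            // then inserts the dataset rows, giving each test a known starting state
            DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
        } finally {
            connection.close();
        }
    }
}

Call this from a setup method before each test and the retrieval operations can be tested against known data, with no insert operations on the DAO required at all.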

Memory usage

A year or two back I was working on a web application which was expected to have moderate use – around 50 concurrent users. The product was generally getting thumbs up from our QA guys. It did everything we expected it to do. Then we had a go at testing under load.

Bang!

We found that if we had only a few users hammering the system for any length of time, the memory usage became unacceptable. Simple maths showed that the problem was to do with the number of open sessions. Each session required 20-30MB of memory from the app server. This is a piddly small amount when we have a handful of test users – it went completely unnoticed against the background noise of a typical server’s memory use. However, once just a hundred sessions have been opened (not necessarily at the same time) – a hundred sessions at 25MB each is around 2.5GB – we’re chewing gigabytes at a time.

Bugs part 3: Don’t shoot the messenger

I’ve spent the last couple of posts discussing developers’ attitudes to bug tracking. More often than not, it could be better. In this third and final post on how developers deal with bugs, I’ll discuss how we might improve our attitudes. Forgive me, but just this once I want to go all hand-wavey and go on about positive attitudes, constructive criticism and so on.

Would it come as a surprise to you if I said that developers see bugs in a negative light? Yes, when we see other people criticising our software, telling us that it’s not good enough, saying “no mate, it’s broke”, we take it as a personal insult. I can’t imagine why.

Bugs part 2: Towards perfection

There is an expectation placed on developers – and pretty much everyone else – that they must strive towards perfection. Failure is not an option. There’s a general attitude that anyone who makes mistakes is not good enough. When mistakes are made, heads should roll.

There’s a nasty cycle where mistakes are ignored or at least forgotten about. First, we all assume that we are good at what we do. When mistakes (inevitably) happen, we assume therefore that they’re someone else’s fault. If the system does not behave the way the customer expects it to, we assume it was poorly specified. If a bug makes it to production, we assume that the tester failed. Even if it’s decided that the bug is our fault and our fault alone, we assume it was a one-off mistake – a bad code day or whatever. Or else we assume that it’s the sort of mistake we’d have made a year or two ago but that we’re now too experienced to let happen again. Either way, we assume it won’t happen again. Because we’re good at what we do.

The big problem is that we assume that we’re perfect. We’re not. There’s a simple test: Have you ever made a mistake? If the answer’s yes, you’re not perfect. If the answer’s no, you’re a liar. And not perfect.

Bugs part 1: Don’t mention the war

My wife – a doctor – has been looking recently at safety systems in the aviation industry and their applications to the health service. In particular, how errors are reported and followed up. Apparently, NASA’s Aviation Safety Reporting System (ASRS) is the way to go. Every industry wanting to improve safety and reduce errors looks to NASA and ASRS.

Software is usually not quite as safety critical as aviation or health, but one of the key targets for a software business is quality. Even if people would not die as a result of buggy software, we usually want high-quality, bug-free software as the long-term cost is lower and so profits are higher. Yet I often find the general attitude to bugs and mistakes is all wrong. The standard attitude is:

My boss wants bug-free software. Bugs make my boss unhappy. Therefore, my boss will be happier if he doesn’t know about the bugs.

It’s slightly simplistic perhaps, but who can honestly say that they’ve discovered some obscure bug in the system and felt absolutely happy about bringing it to the attention of management or other developers? Have you never considered just conveniently forgetting all about it?

JUnit testing Hibernate and Spring

Here’s a nice recipe for unit testing Spring-configured Hibernate. It allows me to neatly test my Spring-configured DAOs and reuse a lot of the Hibernate Session and Transaction configuration beans from my production code. This saves having to rewrite them for my tests and also makes the tests more realistic. As far as possible, I’d rather test my production code than use mocks.
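
In outline, the recipe looks something like the sketch below. It assumes Spring’s test context support and a test-context.xml that imports the production session factory and transaction manager beans; SpannerDAO, Spanner and their methods stand in for your own DAO and entity:

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

// test-context.xml is assumed to import the production Hibernate session
// factory and transaction manager beans rather than redefining them
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:test-context.xml")
@Transactional
public class SpannerDAOTest {

    @Autowired
    private SpannerDAO spannerDAO;   // the Spring-configured DAO under test

    // Runs in a transaction that is rolled back after the test,
    // so the real database is exercised but left untouched
    @Test
    public void savesAndReloadsASpanner() {
        Spanner spanner = new Spanner();
        spanner.setName("Bertha");
        spannerDAO.save(spanner);

        assertNotNull(spannerDAO.findByName("Bertha"));
    }
}

Because the test runs against a real Hibernate session and a real (test) database, it exercises the same mapping and transaction configuration as production code, which is exactly the point of avoiding mocks here.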