Tag Archives: unit testing

How to programmatically ‘ping’ a port using Telnet

As part of a larger application, I needed to determine if particular services were running on remote servers – i.e. if a particular port on the server was accessible.

I wrote this utility class to perform this function, using the telnet client supplied by Apache Commons-net.

import org.apache.commons.net.telnet.TelnetClient;
import org.apache.log4j.Logger;

import java.io.IOException;
import java.net.ConnectException;
import java.net.UnknownHostException;

/**
 * Execute a telnet connection to determine if the server and port are accessible.
 *
 * @author will
 */
public final class TelnetExecutor implements Executor {
    private static final Logger LOG = Logger.getLogger(TelnetExecutor.class);
    private static final int PORT_MIN = 0;
    private static final int PORT_MAX = 65535;

    private final String _server;
    private final int _port;

    public TelnetExecutor(final String server, final int port) {
        if (server == null || server.trim().length() == 0) {
            LOG.warn("Server name has a length of zero. Status result will fail.");
            _server = null;
        } else {
            _server = server;
        }

        if (port < PORT_MIN || port > PORT_MAX) {
            LOG.warn("Server port is out of bounds. Status result will fail.");
            _port = -1;
        } else {
            _port = port;
        }
    }

    /**
     * Determine the result of the port request.
     * <p/>
     * A return value of <code>Success</code> indicates successful connection, <code>Error</code> indicates a 
     * configuration problem, <code>Fail</code> indicates a failed connection, and <code>Unknown</code> indicates an 
     * unexpected problem.
     *
     * @return A {@link Status} representing the result.
     */
    @Override
    public Status getResult() {
        if (_server == null || _port < 0) {
            return Status.ERROR;
        }

        Status status;
        final TelnetClient telnetClient = new TelnetClient();
        try {
            telnetClient.connect(_server, _port);
            telnetClient.disconnect();
            status = Status.SUCCESS;
        } catch (ConnectException ce) {
            LOG.info("Could not connect to server '" + _server + "' _port " + _port);
            status = Status.FAIL;
        } catch (UnknownHostException e) {
            LOG.error("Unknown host: " + _server);
            status = Status.ERROR;
        } catch (IOException e) {
            LOG.error("Error connecting to server: " + _server + " - " + e.getMessage(), e);
            status = Status.UNKNOWN;
        }

        return status;
    }
}

The Status objects are an enumeration.
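
For reference, here is a minimal sketch of that enum and the Executor interface the class implements – the versions in the larger application presumably carry more detail, but this is all the code above relies on:

public enum Status {
    SUCCESS, FAIL, ERROR, UNKNOWN
}

public interface Executor {
    /** @return The {@link Status} result of executing the check. */
    Status getResult();
}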

I can test this using variations on this [integration] JUnit testcase:

@Test
public void shouldConnectToExamplePort80() {
    Assert.assertSame(Status.SUCCESS, new TelnetExecutor("example.com", 80).getResult());
}
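
A variation covering the configuration-error path might look like this (a sketch – the hostname is arbitrary, the point is the out-of-range port):

@Test
public void shouldReturnErrorForOutOfRangePort() {
    // Port 70000 is outside 0-65535, so the constructor logs a warning and getResult() returns ERROR
    Assert.assertSame(Status.ERROR, new TelnetExecutor("example.com", 70000).getResult());
}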

Unit Testing XML – Evaluating Diffs

I am trying to test code that merges two XML files. In the unit test that I am attempting to implement, I want to compare the difference between the merge result and one of the XML files (the larger of the two).

This is a description of the file contents:

  • The right-hand XML file has 13 elements underneath the root, while the left-hand file has 4.
  • Two elements in both files are equivalent, so the left-hand version is discarded.

What I’m expecting is that the two remaining elements in the left-hand file are merged into the resultant XML, so that in reference to the right-hand file, the merged content has two additional elements underneath the root.

XMLUnit

I’ve used XMLUnit previously to compare generated XML with expected output. In this case however, I am more concerned with evaluating the differences between the source and resultant XML.

XMLUnit has a Diff object – org.custommonkey.xmlunit.Diff – which I realised, after some investigation, doesn’t quite offer a diff in the traditional UNIX diff/patch command sense. It evaluates a document in terms of being identical, similar or different, and holds a message describing the first difference encountered.

This has some use of course – my testcase could look something like this:

...
Diff myDiff = new Diff(originalXml, mergedXml);
assertFalse("Expected differences in XML", myDiff.identical());

The message contained in myDiff here is:

[different] Expected number of child nodes '13' but was '17' - comparing  at /Group[1] to  at /Group[1]

I’m not quite content with that as a robust unit test, though. I want to ensure that the two files have specific differences. When I think of comparing two files with a diff, I’m picturing a visual, side-by-side diff, and the concept of a patch – that the set of +/- line differences is collected and made available for inspection/verification.

XMLUnit does have some alternatives that, while not exactly what I’m looking for, I could use and which are worth discussing:

DetailedDiff

DetailedDiff is an extension of Diff, which will give me a list of all the Differences in the comparison.

final DetailedDiff diff = new DetailedDiff(myDiff);
assertTrue("Expected a difference in child nodes", 
    diff.getAllDifferences()
        .contains(DifferenceConstants.CHILD_NODE_NOT_FOUND));

..will assert that the comparison has resulted in a mismatch in the number of children between the two XMLs (Javadoc). A few of those assertions could describe the expected differences between the XMLs.

Counting Nodes

CountingNodeTester is another alternative that allows us to assert the total number of elements contained in the XML:

CountingNodeTester countingNodeTester = 
    new CountingNodeTester(38);
assertNodeTestPasses(mergedXml, 
    countingNodeTester, Node.ELEMENT_NODE);

Or alternatively, comparing the counts of the two XMLs (7 additional nodes):

final int countOriginal = 31;
final int countMerged = countOriginal + 7;
CountingNodeTester countingNodeOriginal = 
    new CountingNodeTester(countOriginal);
CountingNodeTester countingNodeMerged = 
    new CountingNodeTester(countMerged);
assertNodeTestPasses(originalXml, 
    countingNodeOriginal, Node.ELEMENT_NODE);
assertNodeTestPasses(mergedXml, 
    countingNodeMerged, Node.ELEMENT_NODE);

XPath

Finally we could also use XPath evaluations to assert the existence or lack of certain structures:

assertXpathNotExists("/Group/PageContainer[6]", 
   mergedXml);
...
assertXpathExists("/Group/PageContainer/External-Group/File[@Location='/Sites/centre/dcita/site.xml']", 
    mergedXml);
assertXpathEvaluatesTo("/Sites/centre/dcita/site.xml", 
    "/Group/PageContainer/External-Group/File/@Location", mergedXml);

So while XMLUnit gives us a pretty good toolset for XML comparisons, I’m still wondering if there’s a more diff-oriented tool I could use.

java-diff-utils

I found java-diff-utils, which looks like it could be a good option for handling diffs the way I’m imagining. Let’s have a go!

The sample code on the website shows us the basic usage:

// Compute diff. Get the Patch object. Patch is the container for computed deltas.
Patch patch = DiffUtils.diff(original, revised);

..where original and revised are List objects.
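
In this case the lists are simply the lines of the two XML files. A sketch of how they could be built (the resource names here are hypothetical):

final List<String> original = readLines("/original.xml");
final List<String> revised = readLines("/merged_result.xml");
final Patch patch = DiffUtils.diff(original, revised);

..with a small helper in the test class:

private List<String> readLines(final String resource) throws IOException {
    final List<String> lines = new ArrayList<String>();
    final BufferedReader in = new BufferedReader(
            new InputStreamReader(getClass().getResourceAsStream(resource)));
    String line;
    while ((line = in.readLine()) != null) {
        lines.add(line);
    }
    in.close();
    return lines;
}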

The Patch object gives us a list of Deltas, containing the ‘original’ and ‘revised’ segments. Perfect!

This matches what we saw in the visual diff of the two files (disregarding the empty elements being formatted differently):

Assert.assertEquals(1, patch.getDeltas().size());

The Delta itself contains a list of text lines, so we could potentially verify the list of strings manually.

System.out.println(patch.getDelta(0).getRevised().getLines());

..outputs something like:

[<PageContainer>
, <External-Group>
, <File Location="/Sites/centre/dcita/site.xml">, </File>
, </External-Group>
, </PageContainer>
, <PageContainer>
, <Page Title="">
, <File Location="/Content/centre/dcita/index.xml">, </File>
, <Description>Migrated from previous CMS1 Homepage, </Description>
, </Page>
, </PageContainer>
    ]

A cleaner scenario could involve saving the expected delta text to a file (/src/test/resources/merged_xml.diff), and comparing the file contents to the lines in the actual patch delta.

final String target = resourceToString("/merged_xml.diff");
final String actual = 
    listToString(patch.getDelta(0).getRevised().getLines());
Assert.assertEquals(target, actual);

This needs a couple of helper functions to load the diff file into a String, and convert the delta List into a String also.

    private String resourceToString(String filename) {
        final StringBuilder lines = new StringBuilder();
        try {
            final BufferedReader in = new BufferedReader(
                new FileReader(getClass().getResource(filename).getPath()));
            String line;
            while ((line = in.readLine()) != null) {
                lines.append(line);
            }
            in.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return lines.toString();
    }

    private String listToString(List<?> list) {
        final StringBuilder buff = new StringBuilder();
        for (Object o : list) {
            buff.append(((String) o).trim());
        }
        return buff.toString();
    }

Finally…

So in the end, my test case combines the above diff-utils snippets:

        ...
        Patch patch = DiffUtils.diff(original, revised);
        Assert.assertEquals(1, patch.getDeltas().size());

        final String target = resourceToString("/merged_xml.diff");
        final String actual = 
            listToString(patch.getDelta(0).getRevised().getLines());
        Assert.assertEquals(target, actual);

Unit testing JAXB marshalling and XJC-generated classes

JAXB – the Java Architecture for XML Binding – provides a simple way of mapping XML to POJOs, giving the ability to painlessly marshal and unmarshal objects to and from XML.

JAXB’s usefulness is enhanced by the ‘xjc’ tool included in the JDK, which converts an XML schema to a set of Java classes.

A portion of the XML schema (which itself is generated from an XML file):

  <xs:complexType name="MetaType">
      <xs:attribute type="xs:string" name="Name" use="optional"/>
      <xs:attribute type="xs:string" name="Scheme" use="optional"/>
      <xs:attribute type="xs:string" name="Value" use="optional"/>
  </xs:complexType>
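
For that MetaType, xjc produces a POJO roughly along these lines (trimmed for brevity – the generated class also carries Javadoc, and the exact annotations depend on the xjc version):

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "MetaType")
public class MetaType {

    @XmlAttribute(name = "Name")
    protected String name;
    @XmlAttribute(name = "Scheme")
    protected String scheme;
    @XmlAttribute(name = "Value")
    protected String value;

    public String getName() { return name; }
    public void setName(String value) { this.name = value; }

    // ...getters and setters for 'scheme' and 'value' follow the same pattern
}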

Because the XML schema I’m working with has been auto-generated from sample XMLs and not hand-written (and fairly complex!), I’d like to ensure that the XML coming out of the marshalling is what I expect.

The following JUnit 4 test creates and populates the object, then verifies that the object is marshalled to XML properly:

public class MetaTypeTest {
    private final String _xmlHeader = "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>";

    @Test
    public void shouldMarshalAllAttributes() throws Exception {
        final MetaType type = new MetaType();
        type.setName("MetaName");
        type.setScheme("MetaScheme");
        type.setValue("MetaValue");

        // Can't be certain @XmlRootElement annotation has been generated, so wrap obj in JAXBElement
        final JAXBElement<MetaType> element = new JAXBElement<MetaType>(new QName("Meta"), MetaType.class, type);

        // Marshal to output stream
        JAXBContext context = JAXBContext.newInstance(MetaType.class);
        final ByteArrayOutputStream outStream = new ByteArrayOutputStream();
        context.createMarshaller().marshal(element, outStream);

        // Expected output; attribute order follows the declaration order in the generated class
        final String xmlContent = "<Meta Name=\"MetaName\" Scheme=\"MetaScheme\" Value=\"MetaValue\"/>";
        Assert.assertEquals(_xmlHeader + xmlContent, outStream.toString());
    }
}
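
The reverse direction can be checked in a similar way – unmarshalling the XML back into a MetaType. A sketch of another test in the same class, reusing the header constant (the input string simply mirrors the output asserted above):

@Test
public void shouldUnmarshalAllAttributes() throws Exception {
    final String xml = _xmlHeader + "<Meta Name=\"MetaName\" Scheme=\"MetaScheme\" Value=\"MetaValue\"/>";

    final JAXBContext context = JAXBContext.newInstance(MetaType.class);
    final JAXBElement<MetaType> element = context.createUnmarshaller()
            .unmarshal(new StreamSource(new StringReader(xml)), MetaType.class);

    Assert.assertEquals("MetaName", element.getValue().getName());
    Assert.assertEquals("MetaScheme", element.getValue().getScheme());
    Assert.assertEquals("MetaValue", element.getValue().getValue());
}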


Loading custom-named BDD scenario files for JBehave

Now that I’m using JBehave in a commercial project, I’ve rewritten the loading of the scenario files in such a way that I can call my tests something like com.example.login.InvalidLoginScenario and have the corresponding scenario file under {project}/src/test/resources/invalid_login.scenario.

Previously…

The standard JBehave scenario file is loaded with UnderscoredCamelCaseResolver, which converts the camel-case class name to an underscore-separated file name. A resource path is constructed from the package plus the underscored filename to locate the file – e.g. {src.test}/com/example/login/invalid_login_scenario.

Previously (with inspiration), I modified the testcase to override the default Configuration object, which allowed the loading of the scenario file with an extension – so it would now look for {src.test}/com/example/login/invalid_login_scenario.scenario.

Goal

To make the creation and maintenance of the JBehave scenarios and testcases easier, I decided on some standards:

  • JBehave testcase classes should be suffixed with Scenario, to clearly communicate their purpose.
  • Scenario filenames should map to their corresponding test classes.
  • Scenario files should all reside in a single location under resources/, rather than mirroring the package structure.
  • Scenario files should have the extension .scenario, to improve readability.

But instead of having a file named invalid_login_scenario.scenario, I want the test class InvalidLoginScenario to map to the file invalid_login.scenario. All this was basically possible with existing JBehave classes, when configured the correct way (and certain functions overridden).

The Source

import org.jbehave.scenario.PropertyBasedConfiguration;
import org.jbehave.scenario.RunnableScenario;
import org.jbehave.scenario.errors.PendingErrorStrategy;
import org.jbehave.scenario.parser.ClasspathScenarioDefiner;
import org.jbehave.scenario.parser.PatternScenarioParser;
import org.jbehave.scenario.parser.ScenarioDefiner;
import org.jbehave.scenario.parser.UnderscoredCamelCaseResolver;

/**
 * Customisation of standard JBehave {@link PropertyBasedConfiguration} to allow clearer naming of scenario files:
 * <ul>
 * <li> Usage of *.scenario file extension</li>
 * <li> Strip 'Scenario' off test class names</li>
 * <li> Load scenario files from classpath root</li>
 * </ul>
 * So a test class named <code>InvalidUsernameScenario</code> would be attempting to resolve the resource path <code>/invalid_username.scenario</code>.
 *
 * This configuration also fails on 'pending' (unimplemented) steps.
 */
public final class ScenarioConfiguration extends PropertyBasedConfiguration {
    private final ClassLoader _classLoader;

    public ScenarioConfiguration(final ClassLoader classLoader) {
        _classLoader = classLoader;
    }

    @Override
    public ScenarioDefiner forDefiningScenarios() {
        final ResourceNameResolver filenameResolver = new ResourceNameResolver(".scenario");
        filenameResolver.removeFromClassname("Scenario");
        return new ClasspathScenarioDefiner(filenameResolver, new PatternScenarioParser(this.keywords()), _classLoader);
    }

    @Override
    public PendingErrorStrategy forPendingSteps() {
        return PendingErrorStrategy.FAILING;
    }

    /**
     * Override {@link UnderscoredCamelCaseResolver} to load resources from classpath root. This means we can collect
     * scenario files in a single resource directory instead of in packages.
     */
    class ResourceNameResolver extends UnderscoredCamelCaseResolver {
        public ResourceNameResolver(final String extension) {
            super(extension);
        }

        @Override
        protected String resolveDirectoryName(final Class<? extends RunnableScenario> scenarioClass) {
            return "";
        }
    }
}

The function forDefiningScenarios() is the important part – it sets the resolver to use the .scenario extension, but also strips out ‘Scenario’ from the class name.

Also, to force the resolver to look at the classpath root, the resolveDirectoryName() function is overridden to return an empty string.

A test case using this Configuration object then looks like this:

public class InvalidLoginScenario extends Scenario {
    public InvalidLoginScenario() {
        super(new ScenarioConfiguration(InvalidLoginScenario.class.getClassLoader()), new LoginScenarioSteps());
    }
}
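
For completeness, LoginScenarioSteps is an ordinary JBehave steps class. A sketch, assuming the JBehave 2.x annotations and with hypothetical step text matching an invalid_login.scenario file:

import org.jbehave.scenario.annotations.Given;
import org.jbehave.scenario.annotations.Then;
import org.jbehave.scenario.annotations.When;
import org.jbehave.scenario.steps.Steps;

public class LoginScenarioSteps extends Steps {
    @Given("a user on the login page")
    public void aUserOnTheLoginPage() {
        // drive the application to the login page
    }

    @When("the user submits an invalid password")
    public void theUserSubmitsAnInvalidPassword() {
        // attempt the login
    }

    @Then("an authentication error is displayed")
    public void anAuthenticationErrorIsDisplayed() {
        // assert on the error message
    }
}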


Building a JStack Parser

Background

Recently, I was briefed on a situation where a major public-facing Java web application was going down fairly regularly, due to what appeared to be an out-of-memory error.

After an investigation by a previous developer turned up no concrete leads, they suggested using JStack to get a profile of the application’s threads and the locks they were holding that could be causing the crash.

Now I’ve never worked with JStack before, but the purpose seems fairly straightforward. Trouble is, with ~500 threads in the output, analysing each one to find a common cause would be a pretty intensive task.

I set about searching for some kind of output analyser for JStack, but with no luck, I decided to write my own to give me a summary of the results.

Requirements

At least for my application, there were a lot of similar-looking stacktraces, so it made sense to group and summarise the results.

The JStack output I was provided with looked something like this:

Attaching to process ID 18526, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 1.5.0_14-b03
Thread t@6872: (state = BLOCKED)
- java.lang.Thread.sleep(long) @bci=-1766132960 (Interpreted frame)
- java.lang.Thread.sleep(long) @bci=0 (Interpreted frame)
- sun.net.www.http.KeepAliveCache.run() @bci=3, line=149 (Interpreted frame)
- java.lang.Thread.run() @bci=11, line=595 (Interpreted frame)
Thread t@6871: (state = BLOCKED)
- java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be imprecise)
- java.util.TimerThread.mainLoop() @bci=201, line=509 (Compiled frame)
- java.util.TimerThread.run() @bci=1, line=462 (Interpreted frame)

What I initially wanted from my parser was:

  • A count of threads in each state
  • Summary information for each stacktrace.

So what I need to do is extract each thread and its corresponding stacktrace to be able to process and report on it.

The Code

Firstly, the main method checks for a filename parameter, then passes it to the application instance, which checks that the file exists before calling the parser.

public class ParseJStack {
    public ParseJStack(final String filename) {
        File jstackFile = new File(filename);
        if (!jstackFile.exists()) {
            System.out.println("File does not exist. Exiting.");
            return;
        }
        //new Parser(jstackFile).process();
    }

    public static void main(String[] args) {
        if (args.length < 1) {
            System.out.println("Program requires a filename");
            System.exit(1);
        } else {
            new ParseJStack(args[0]);
        }
    }
}

For now, the call to the parser is commented out, as I’m yet to create it or determine its return type.

Test-driven

Now that I’m thinking about the main functionality of the application, I’ll write testcases (JUnit 4) around the container classes JStackMeta and JStackEntry, which are the thread detail and stacktrace objects respectively, and will be populated by the parser engine.

JStackEntry will be a fairly simple container for a thread’s stacktrace lines, plus a String for its header text. From the looks of it, the thread headers all follow the same pattern, so I expect to be able to extract the state (and the id if we need it later) with a simple regex pattern.

public class JStackEntryTest {
    @Test
    public void shouldGetContentsAsNotNullWhenNullParameter() throws Exception {
        final JStackEntry stackEntry = new JStackEntry("");
        Assert.assertNotNull(stackEntry.getContents());
    }

    @Test
    public void shouldGetContents() throws Exception {
        final JStackEntry stackEntry = new JStackEntry("");
        stackEntry.append("Test Content");

        Assert.assertEquals(12, stackEntry.getContents().length());
        Assert.assertEquals("Test Content", stackEntry.getContents().toString());
    }

    @Test
    public void shouldGetEntryState() throws Exception {
        final JStackEntry stackEntry = new JStackEntry("Thread t@6872: (state = BLOCKED)");
        stackEntry.append("Test Content");

        Assert.assertEquals("BLOCKED", stackEntry.getState());
    }

    @Test
    public void shouldGetUnknownStateWhenHeaderEmpty() throws Exception {
        final JStackEntry stackEntry = new JStackEntry("");
        stackEntry.append("Test Content");

        Assert.assertEquals("UNKNOWN", stackEntry.getState());
    }
}
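
The implementation that falls out of these tests is small – a sketch of what JStackEntry could look like (the stacktrace lines are accumulated in a StringBuilder, matching how getContents() is used in the report later, and the regex is one way of extracting the state from the header):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JStackEntry {
    private static final Pattern STATE_PATTERN = Pattern.compile("state = (\\w+)");

    private final String _header;
    private final StringBuilder _contents = new StringBuilder();

    public JStackEntry(final String header) {
        _header = header == null ? "" : header;
    }

    public void append(final String line) {
        _contents.append(line);
    }

    public StringBuilder getContents() {
        return _contents;
    }

    public String getState() {
        final Matcher matcher = STATE_PATTERN.matcher(_header);
        return matcher.find() ? matcher.group(1) : "UNKNOWN";
    }
}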

JStackMeta will hold the collection of JStackEntry objects, and the non-entry related meta (e.g. process id, java version info that appears at the top of the file).

public class JStackMetaTest {
    @Test
    public void testConstructorInit() throws Exception {
        Assert.assertNotNull(new JStackMeta().getHeader());
        Assert.assertNotNull(new JStackMeta().getEntries());
    }

    @Test
    public void testAppend() throws Exception {
        final JStackMeta stackMeta = new JStackMeta();
        stackMeta.append("Test Meta");
        Assert.assertEquals(9, stackMeta.getHeader().length());
    }

    @Test
    public void testAddEntry() throws Exception {
        final JStackMeta stackMeta = new JStackMeta();
        final JStackEntry stackEntry = new JStackEntry("");
        stackEntry.append("Test Entry");
        stackMeta.addEntry(stackEntry);
        Assert.assertEquals(1, stackMeta.getEntries().size());
    }
}
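
Similarly, a sketch of what JStackMeta could look like to satisfy its tests:

import java.util.ArrayList;
import java.util.List;

public class JStackMeta {
    private final StringBuilder _header = new StringBuilder();
    private final List<JStackEntry> _entries = new ArrayList<JStackEntry>();

    public void append(final String line) {
        _header.append(line);
    }

    public void addEntry(final JStackEntry entry) {
        _entries.add(entry);
    }

    public StringBuilder getHeader() {
        return _header;
    }

    public List<JStackEntry> getEntries() {
        return _entries;
    }
}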

With those two classes set, we move back to the Parser, which populates them. It’ll take the File handle provided by the application object, read in the contents and store them in a JStackMeta and its JStackEntry objects. To test this class I created a test.jstack file (with the same content as the example at the top of this post).

public class ParserTest {
    @Test
    public void testProcess() throws Exception {
        final URL resource = getClass().getResource("test.jstack");
        final File file = new File(resource.getFile());

        final JStackMeta stackMeta = new Parser(file).process();

        Assert.assertEquals(132, stackMeta.getHeader().length());
        Assert.assertEquals(2, stackMeta.getEntries().size());
    }
}

The actual parsing function looks like this (note that I’ve decided that it is important to keep the newlines in the stack trace to simplify the output):

    /**
     * Process the JStack output file and extract the data into a {@link JStackMeta} object.
     *
     * @return The {@link JStackMeta} object representing the JStack output.
     */
    public JStackMeta process() {
        _meta = new JStackMeta();
        try {
            FileInputStream inputStream = new FileInputStream(_file);

            BufferedReader in = new BufferedReader(new InputStreamReader(inputStream));

            boolean finishedHeader = false;
            JStackEntry currentEntry = new JStackEntry("");

            String line;
            while ((line = in.readLine()) != null)
            {
                // Skip blanks
                if ("".equals(line.trim())) {
                    continue;
                }
                line += "\n";

                // Check if we're done with the header lines
                if (!finishedHeader && line.startsWith("Thread")) {
                    finishedHeader = true;
                }

                if (!finishedHeader) {
                    _meta.append(line);
                    continue;
                }

                if (line.startsWith("Thread")) {
                    currentEntry = new JStackEntry(line);
                    _meta.addEntry(currentEntry);
                } else {
                    currentEntry.append(line);
                }
            }

            in.close();
        } catch (FileNotFoundException e) {
            System.out.println("ERROR: File was not found");
        } catch (IOException e) {
            System.out.println("ERROR: A problem occurred");
            e.printStackTrace();
        }

        return _meta;
    }

Reporting

The last part of the application is a simple analysis of the JStackMeta object and reporting the statistics. As per the requirements, I want two parts to this: the totals for the different states, and the single-line summaries with counts.

For the purposes of a quick solution, I’m writing the output to the console. I’ll make a Report interface, just in case I get inspired and write some other implementation to report to a text or HTML file..

I’ll use two maps to keep track of the totals – one map for the status counts and one for the line summary totals. To keep the second part looking organised, I’ll use a TreeMap so that the entries remain sorted.

public interface Report {
    void buildReport(JStackMeta stackMeta);
}
public class ConsoleReport implements Report {
    public void buildReport(final JStackMeta stackMeta) {
        Map<String, Integer> stateCountMap = new HashMap<String, Integer>();
        Map<String, Integer> messageCountMap = new TreeMap<String, Integer>();  // Sorted

        // Report on results
        for (int i = 0; i < stackMeta.getEntries().size(); i++) {
            JStackEntry entry = stackMeta.getEntries().get(i);
            final String state = entry.getState();

            final Integer count;
            if (stateCountMap.containsKey(state)) {
                count = stateCountMap.get(state) + 1;
            } else {
                count = 1;
            }
            stateCountMap.put(state, count);

            final StringBuilder contents = entry.getContents();
            final String strStackEnd;
            if (contents.length() != 0) {
                strStackEnd = "(" + state + ") " + contents.substring(0, contents.indexOf("\n"));
            } else {
                strStackEnd = "(" + state + ") " + "[No stacktrace]";
            }

            final Integer countMessage;
            if (messageCountMap.containsKey(strStackEnd)) {
                countMessage = messageCountMap.get(strStackEnd) + 1;
            } else {
                countMessage = 1;
            }
            messageCountMap.put(strStackEnd, countMessage);
        }

        // State counts
        for (Map.Entry<String, Integer> entry : stateCountMap.entrySet()) {
            System.out.println(entry.getValue() + " threads at " + entry.getKey());
        }

        System.out.println("\n");

        // Message counts
        for (Map.Entry<String, Integer> entry : messageCountMap.entrySet()) {
            System.out.println(entry.getValue() + "\tthreads at " + entry.getKey());
        }
    }
}

(I may have gotten lazy here and neglected some unit test coverage – but with the report being written to the console it’s fairly easy to eyeball bugs, right?)

With the Parser and report classes implemented, I can update the code in ParseJStack with a call to the parser and report:

...
        final JStackMeta stackMeta = new Parser(jstackFile).process();

        final Report report = new ConsoleReport();
        report.buildReport(stackMeta);
...

Finally

The end result of this analysis is being able to see an overview of the JStack results, which may give some indication of where to start looking for problems with the application.

39 threads at IN_NATIVE
503 threads at BLOCKED

1	threads at (BLOCKED)  - com.sun.appserv.util.cache.BaseCache.incrementMissCount() @bci=6, line=864 (Compiled frame; information may be imprecise)
3	threads at (BLOCKED)  - java.io.ByteArrayOutputStream.write(byte[], int, int) @bci=71, line=95 (Interpreted frame)
2	threads at (BLOCKED)  - java.io.FileInputStream.readBytes(byte[], int, int) @bci=0 (Compiled frame; information may be imprecise)
1	threads at (BLOCKED)  - java.lang.Object.hashCode() @bci=0 (Compiled frame; information may be imprecise)
4	threads at (BLOCKED)  - java.lang.Object.wait(long) @bci=-1749045981 (Interpreted frame)
195	threads at (BLOCKED)  - java.lang.Object.wait(long) @bci=-1749046076 (Interpreted frame)
    ...

As you can see, we’re interested in the counts for the threads’ end frames, and the state each thread is in at that point.

This at least gives an indication of where the majority of the threads are ending up, and from this we might be able to gain some insight into what these threads are waiting on (e.g. by looking up the full stacktraces via the byte code index in the original JStack log).

Next Steps

  • It may be of use to include a little bit more of the stacktrace in the comparison and output – I noticed that some traces would end on a frame with the same byte code index, but their full stacks would vary slightly.
  • As mentioned, it might be useful to write the report out to a file, such as a text or HTML file.


A brief introduction to Behaviour-Driven Development


Behavior-Driven Development (BDD) is a methodology developed by agile developer Dan North in 2006.

It was created on top of an existing methodology named Test-Driven Development (TDD), a fairly widely known and discussed technique. Put simply, TDD specifies that simple test cases should be written first, and the developer then writes the smallest amount of code possible to make the unit tests pass.

Whilst TDD is firmly in the realm of the developer, BDD attempts to bridge the gap between developers, testers, business analysts, and other stakeholders; it is closer to technical specifications than to simply unit tested code.

By developing a consistent vocabulary across groups, we work towards eliminating miscommunication and ambiguity. And by putting the emphasis on behavior, we are more closely working with requirement specifications and the business value of the product’s function.

Stories

Like many agile processes, BDD employs the concept of ‘user stories’ in the form of a narrative, but with a specific format to them.

Story: [Title]

As a [role]
I want [feature]
So that [benefit]

This format clearly identifies the actor, the system feature and the business value or benefit of the story.

Each story also has acceptance criteria, in a format similar to this:

Scenario 1: [Title] 

Given [context]
  And [some more context]
  ...
 When [event]
 Then ensure [outcome]
 And ensure [another outcome]
 ...

The use of the word ensure identifies outcomes that are the responsibility of the scenario. Using the word should, on the other hand, indicates that the desired outcome could be affected by or reliant on another part of the system, and implicitly suggests a challenge to the assertion – “Should it? Should it really?”

Test Cases

Now we get to the core test-driven aspect – writing tests to cover each scenario. North encourages thinking of tests as sentences starting with ‘should’ – as in “the system should [do something]” – which converts to a test case named ‘testShouldDoSomething’. This makes the intention of each test clear, and we should be able to relate it back to the acceptance criteria.

Now for an example…

In this simple example we have a ticket machine on a bus, where a trip duration is purchased by a passenger. The machine has a coin slot, an LCD to display the purchased time, a ‘print’ button, and a ticket printer (it does not have a coin return!).

Story: Purchasing a bus ticket

As a bus passenger
I want to purchase a bus ticket
So that I can board the bus

Acceptance Criteria

Scenario 1: Inserting initial coins
Given that the ticket machine is operating
 When coins are inserted
 Then ensure the purchased travel time is displayed on the LCD screen

Scenario 2: Inserting additional coins
Given that the ticket machine is operating
 And coins have been inserted
 When coins are inserted
 Then ensure the incremented travel time is displayed on the LCD screen 

Scenario 3: Printing the ticket
Given that the ticket machine is operating
 And coins have been inserted
 When the print button is pressed
 Then ensure ticket is printed with the purchased travel time
 And ensure that the LCD screen is cleared

We can see from the narrative the full extent of the story: The user, the system feature, and the value of the feature.

The acceptance criteria can then be transformed into unit tests:


// Scenario 1
public void testShouldSetTimeOnFirstCoinInsert() {...}

// Scenario 2
public void testShouldIncrementTimeOnCoinInsert() {...}

// Scenario 3
public void testShouldPrintTicketWithPurchasedTime() {...}
public void testShouldClearScreenAfterPrint() {...}
public void testShouldFailWhenPrintWithNoPurchase() {...}

Each test case verifies an outcome of the scenario, and for scenario 3 also verifies an error condition (i.e. when the print button is pressed and no coins have been inserted).
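
As an illustration, the first of those tests might end up looking something like this – TicketMachine and its methods are hypothetical names and the coin-to-minutes rate is made up; the point is that the test reads directly off scenario 1:

public void testShouldSetTimeOnFirstCoinInsert() {
    // Given that the ticket machine is operating
    TicketMachine machine = new TicketMachine();

    // When coins are inserted (hypothetically, $2.00 buys 20 minutes)
    machine.insertCoin(200);

    // Then ensure the purchased travel time is displayed on the LCD screen
    assertEquals(20, machine.getDisplayedMinutes());
}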

With the test cases supporting the acceptance criteria, we’re effectively applying test-driven development practices to business value, visible to stakeholders outside of the development team. Considering the difficulty many of us have convincing non-technical people (and management) of the benefits of code quality strategies, being able to actively engage them at a higher level with story narratives and plain-English acceptance criteria is quite beneficial.

Consider these points

Summarised from Dan North’s blog

  • The story title should describe an activity
  • The story narrative should include a role, a feature and a benefit – “As a [role] I want [feature] so that [benefit]”
  • The scenario title should describe what’s different
  • The scenario should be expressed in terms of Givens, Events and Outcomes
  • The givens should define all of, and no more than, the required context
  • The event should describe the feature
  • The story should be small enough to fit in an iteration

References

[1] http://dannorth.net/introducing-bdd
[2] http://en.wikipedia.org/wiki/Test_Driven_Development
[3] http://dannorth.net/whats-in-a-story
[4] http://behaviour-driven.org/Introduction
[5] http://stackoverflow.com/questions/2509/what-are-the-primary-differences-between-tdd-and-bdd#2548
[6] http://en.wikipedia.org/wiki/Behavior_Driven_Development