ModeShape

An open-source, federated content repository

Git and SVN: the beginning

My previous post about Git and SVN hopefully intrigued you enough to want more. Yes, you can use Git locally even though you're stuck with Subversion for the central repository. In this post I'll show you how to get started, and in the next post we'll see how to use the Git Eclipse plug-in.

1. Install Git

You can install it several different ways, depending on your platform and your preference. I have a Mac, so I could download and compile it using MacPorts, or I could grab the latest installer. I like simple, so since I don’t yet have MacPorts I went for the installer. Piece of cake. I also chose to manually set up my path, so in my .bashrc file I added these statements:

export MANPATH=${MANPATH}:/usr/local/git/man
export PATH=${PATH}:/usr/local/git/bin:/usr/local/git/libexec/git-core/

(Okay, I actually used some variables in my real .bashrc to keep things DRY. I thought the above might cut through all that noise.) Note that we added "git/libexec/git-core/" to the PATH. This directory contains all the commands that start with "git-" (e.g., "git-svn …" is really the same as running "git svn …").

2. Clone a remote SVN repository

The next step for me was creating a local git repository that mirrored the remote SVN repository.

$ git svn clone -s http://example.com/my_subversion_repo local_dir

The "-s" option tells git that the Subversion repository uses the "standard" layout of "trunk", "branches", and "tags" directories. When this command runs, it creates a subdirectory called "local_dir", initializes a Git repository there, connects to the SVN repository at the given URL (which should not include "trunk" or any other branch/tag information), downloads the entire history (including branches and tags), does some cleanup, and checks out the equivalent of the SVN "trunk" into your working area. And because of Git's bridge to SVN, we'll be able to regularly pull changes from SVN into our Git repository, and we can even push changes we make locally back into SVN.
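If your Subversion repository doesn't follow that standard layout, you can spell out the directory names explicitly instead of using "-s" (the names below are just placeholders for whatever your repository actually uses):

$ git svn clone -T trunk -b branches -t tags http://example.com/my_subversion_repo local_dir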

We now have our new Git repository, so cd into it:

$ cd local_dir

If you look closely, you'll see a couple of things. First, Git doesn't litter your working area with ".svn" folders; instead, there's just one ".git" directory at the top. Git also uses a different technique than SVN for remembering which files and directories should be ignored, but we can bootstrap Git's ignore information automatically from SVN's. We'll use the ".gitignore" file in our working area, since we want this information to be versioned and available to everyone using the repository. If your SVN repository already has a ".gitignore" file, then someone's done the work for you. Otherwise, you'll have to generate the file yourself:

$ git svn show-ignore > .gitignore

This will take a minute or two, depending upon the size of your working area. I would then commit this new file, placing it in the master branch.

3. Committing (locally)

To commit our new file, we first have to tell Git that we want to start tracking this file. Or, in Git parlance, we want to stage the file by adding it to the index:

$ git add .gitignore

We can use the status command to see a summary of what’s already staged, what’s being tracked but hasn’t been staged, and what’s not being tracked at all:

$ git status .

We can also get a more detailed report showing the individual changes that haven't yet been staged:

$ git diff
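Plain "git diff" compares your working area against the index; to see what's already been staged for the next commit, add the "--cached" option:

$ git diff --cached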

We can then commit our staged changes:

$ git commit -m "Description of commit" .

or, if we want to automatically stage any modified or deleted files, we can use the "--all" (or "-a") option:

$ git commit -a -m "Description of commit"

With the “commit” command, Git records the entire set of staged changes as a single commit to our current branch, and it moves the branch’s HEAD pointer to this last commit. Remember, Git works locally, so this commit is recorded locally. This may sound strange at first, but really it allows you to commit (locally) much more often, which can improve your development process since backing out things that don’t work is really easy.
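And since nothing has left your machine yet, backing out the last local commit really is a one-liner:

$ git reset --soft HEAD~1

This removes the commit from the branch but leaves its changes staged in your working area (use "--hard" instead if you want to discard the changes entirely).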

To see the history of commits, use the "log" command:

$ git log

This displays the author, timestamp, message, and SHA-1 hash (the unique identifier) for each of the last n commits. You can use these hashes (or any unambiguous prefix, at least 4 characters long) in the diff command.

$ git diff <hash>

displays the differences between a previous commit and your current working area, while:

$ git diff <hash1> <hash2>

displays the difference between two previous commits.

There are a lot of things Git can do – way too many to cover here. Take a look at "git bisect", "git grep", and "git revert", just to name a few. But let's get back to our workflow.

4. Updating from Subversion

If you're familiar with Subversion, you're hopefully used to doing "svn update" before committing your changes. This pulls any revisions that others have made since you last updated, and it's good form to do this and make sure everything compiles and runs locally before committing.

In Git, you do this with the "git svn rebase" command. This command fetches new revisions from the SVN repository tracked by the current HEAD (of the current branch), and "rebases" any local commits on the branch so that they apply on top of these latest SVN changes. This is analogous to creating a patch file for each local commit (relative to that commit's parent) since the last rebase, updating the branch with the new SVN revisions, then sequentially applying the patches. In other words, it takes all your local changes (in the form of commits) and reapplies them to the latest SVN revisions.

Here's the actual command to rebase the current branch:

$ git svn rebase

If you want to fetch all of the revisions on all branches in the SVN repository and rebase any local commits on those branches, you can add the "--fetch-all" option.
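If one of your local commits conflicts with an incoming SVN revision, the rebase stops so you can resolve it, just like a regular Git rebase: fix up the conflicting files, then

$ git add <conflicted_file>
$ git rebase --continue

(or run "git rebase --abort" to give up and put the branch back the way it was).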

5. Committing back to Subversion

Once you have rebased a branch, you may still have local commits on it that aren't yet in Subversion. We want the equivalent of an SVN commit, which in Git means committing each local commit on the branch back to SVN:

$ git svn dcommit

This creates a new revision in SVN for each local commit on the branch. Of course, if you want them to look like a single revision, you’d need to squash the commits before running the dcommit command.
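With the standard "-s" layout, git-svn tracks the SVN trunk as a remote ref named "trunk" (run "git branch -r" to see the exact names in your repository), so an interactive rebase against that ref lets you mark all but the first commit as "squash":

$ git rebase -i trunk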

Conclusion

This post focused mostly on how to get started with Git using a remote Subversion repository, so I didn't talk much about branching and merging. I think you'll agree, though, that Git's Subversion bridge is very intuitive, and in fact many of the concepts it uses are the same concepts used by the rest of Git. And once again I'm going to suggest reading the Git Community Book or watching Bart Trojanowski's presentation.

Note: Portions of this post were taken from a previous post on my personal blog.

Filed under: techniques, tools

Git it together with Subversion

This is the first in a series of posts about how I use Git to work on JBoss DNA. Like many other JBoss.org projects, DNA keeps all of its code in a central Subversion repository. And for a long time, I naturally used a combination of the Subversion command-line client and the Subversive Eclipse plug-in.

Git works a little differently than SVN. Git is a distributed version control system, which means you have a local copy of your project's Git repository. Yup, the whole thing. That's a game changer, because you can commit, branch, view history, or whatever while you're disconnected from the network. And it's fast (since everything is done locally) and surprisingly space-efficient. Then, when you're ready, you can push or pull changes to and from other Git repositories.

Git also works really well pushing to and pulling from a remote SVN repository. In fact, that support is built right into Git. So if your remote repository is SVN, you can still use Git locally. The only difference is how you push to and pull from the remote repository.

What really convinced me to try Git, though, was how brilliantly it does branching. Actually, branching and merging. Quite a few source control management systems can branch, but the problem is they don’t merge – or if they do, they don’t do it well. Git, on the other hand, makes merging easy. In fact, it’s really easy to create branches, switch between branches, and merge branches.

So why am I so hung up on branches? Well, being able to branch and merge means that I can use branches for everything – bug fixes, new feature work, prototypes, or maintenance. I can create a branch where I do all my work, and I can commit as many times as I want. I can even use tags to help me remember important commits (e.g., “all tests pass”). And when my work is complete, I can merge my changes onto as many branches as I need to. All of this is done locally, without polluting the main repository’s trunk or branches. Of course, if I’m working with other people on a feature or bug (e.g., “please review these changes and provide feedback”), I could expose my own repository or we can leverage a shared repository. And when the work is all complete, merging onto the official branch(es) is cake, too.
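To give you a taste of how lightweight this is, here's an entire local bug-fix cycle (the branch name is made up, of course):

$ git checkout -b fix-1234    # create the branch and switch to it
  ... edit, commit, edit, commit ...
$ git checkout master         # switch back
$ git merge fix-1234          # merge the fix onto master
$ git branch -d fix-1234      # delete the now-merged branch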

In forthcoming posts, I’ll show how to install and use Git with a central Subversion repository, describe some of the tooling I use with Git, and outline a number of very common use cases: simple branching and merging, stashing your work to fix a bug, and so on.

Until then, if you want to learn more about Git, watch Bart Trojanowski's presentation. It is 2 hours long, but it really is a brilliant tutorial if you're interested in learning the fundamentals of how Git works and how to use it.

Portions of this post were taken from a previous post on my personal blog.

Filed under: techniques, tools

AtomPub, REST, repositories … and Starbucks?

InfoQ has a great article on how REST and AtomPub work, using a “Starbucks” example. (Via Joel Amoussou.)

One thing this article does well is explain how representations can contain URIs for followup activities (e.g., what can be done next with a resource). In addition to the “next” used in the article, activities might include undoing the last change, or “publishing” some artifact. This works extremely well when coupled with some sort of state transition (process) engine on the back end and allows clients to “discover” the process, even when that process changes! And note that the process might include lifecycle changes and/or workflow.

Now, how do REST and AtomPub relate to JBoss DNA and JCR repositories? Well, it's always useful to have a REST API into a JCR repository. For the most part, I think a generic repository library like JBoss DNA will provide a generic REST interface. After all, working with a JCR repository doesn't involve a lot of process – it's mostly just simple get/post/put/delete operations on the nodes (resources). But AtomPub only defines three levels (services, collections, and entries), and this doesn't really map well to a hierarchical repository. (Yes, I could conceive of several different mappings, but the question is whether any of them really are useful.)

So while a content repository system might not provide a generic AtomPub interface, a domain that uses a JCR repository may very well want to use AtomPub to allow clients to work with the domain's content. Consider a SOA governance repository, which may have multiple collections (e.g., "services", "published services", "news", "policies", etc.), and workflow (e.g., uploading a service, publishing a service, etc.). Here, AtomPub maps to the SOA governance domain pretty easily, but this mapping is probably different from that of other domains.

It would be nice if JBoss DNA could have a generic “AtomPub” facility that is content-driven: specify the collections (maybe defined as queries), map the entities to workflow processes, etc. But that’s getting ahead of ourselves. First things first.

Filed under: rest, techniques, tools

More on federated repositories

Interesting post by Mark Little, which talks about the benefits of a federated repository and compares it with the way in which the human brain stores stuff. Cool stuff. DNS works in a similar way. Now throw in how Git works, and things get really interesting.

I really hope that this is where JBoss DNA federation is heading. Go to your DNA federated repository, and let it get the information from the system that has it. Obviously that’s not the complete federation story, so we have more to do. For example, we need more connectors so that a DNA federated repository can access information from other DNA or JCR repositories (in addition to other systems).

But I think that our core architecture and caching strategies will let us grow into these much larger and much more interesting scenarios.

Filed under: features, federation, repository, techniques

To check or not to check (parameters) – Part Deux

In one of my recent posts, I talked about checking method parameters using a utility that throws IllegalArgumentException when a parameter is deemed invalid. The whole point of this check is to prevent a more convoluted exception that would otherwise certainly happen later on, and to use the IllegalArgumentException to give the developer a useful message about what is and isn't allowed.

The post generated some good comments. For example, my post didn't make clear that checking method parameters is not a replacement for checking the validity of input data (e.g., data provided by a user, say via a form, or data provided to a database). There are a lot of neat frameworks for doing that, including Commons Validator and Hibernate Validator, to name two.

I wanted to follow up on my previous discussion, though, and discuss checking method parameters using Java assertions. In general, this is a bad idea. For one thing, checking arguments with a static utility method (e.g., ArgCheck.isNotNull(...)) is already very fast. Second, assertions can be disabled, and disabling them shouldn't change the behavior of the code. (This is why I'd argue that continuous integration should be run both with and without assertions.)

The major requirement for using assert is that removing the assertion should not change the flow of the code – this is called control flow invariance. This means that using assert for checking parameters violates that requirement: when assertions are disabled, invalid arguments sail right past the check, and the method behaves differently than it does with assertions enabled.
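To make the contrast concrete, here's a minimal sketch (ArgCheck is our utility from the earlier post; the Configuration type and field are made up for illustration):

    public void setConfiguration( Configuration config ) {
        // Always enforced, even in production: the caller immediately gets a
        // meaningful IllegalArgumentException
        ArgCheck.isNotNull(config, "config");
        this.config = config;
    }

    private String describe( Configuration config ) {
        // Control flow invariant: for valid inputs, removing this assert
        // changes nothing about what the method does
        assert config != null;
        return "Configuration: " + config;
    }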

Filed under: techniques

Internationalization in JBoss DNA

It’s pretty important to be able to create applications that support multiple languages, and most libraries should provide some kind of support for this. The first step is making your code support internationalization, but then you need to localize the application for each language (or locale). We’ve included internationalization (or “i18n”) in JBoss DNA from the beginning, but we haven’t done much with localization (or “L10n”), and have only one (default) localization in English.

Java really does a crappy job at supporting internationalization. Sure, it has great Unicode support, and it does provide a standard mechanism for identifying locales and looking up bundles given a locale. But where is the standard approach for representing an internationalized message ready for localization into any locale? ResourceBundle.getString()? Seriously?

What I want is something analogous to an internationalized String capable of holding onto the replacement parameters. Each internationalized string should be associated with the key used in the resource bundles. I want to localize an internationalized string into the default locale, or into whatever locale you supply, and even into multiple locales (after all, web applications don’t support just one locale). And I should be able to use my IDE to find where each internationalized string is used. I should be able to test that my localization files contain localized messages for each of the internationalized strings used in the code, and that there are no duplicate or obsolete entries in the files. I also don’t want that many resource files (one per package – like Eclipse used to do – sucks); one per Maven project is just about right.

I’m not asking for much.

Meet the players

There are quite a few existing internationalization open source libraries, including JI18n, J18n, and Apache Commons I18n, just to name a few. Too many of these try to be too smart and do too much (like automatically localizing a message identified by a Java annotation into the current locale, or using aspects to do things automatically). This stuff just tends to confuse IDE dependency analysis, searching, and/or debugging. We found nothing we liked, and lots of things we didn't like. Internationalization shouldn't be this hard.

Sweet and to the point

So we did what we don't like to do: we invented our own very simple framework. And by simple, I mean there's only one I18n class, which represents an internationalized string and provides some static utility methods (plus an abstract JUnit test class; see below). To use it, simply create an "internationalization class" that contains a static I18n instance for each of the messages in a bundle, and then create a resource bundle properties file for each of these classes. That's it!

So, let’s assume that we have a Maven project and we want to create an internationalization class that represents the internationalized strings for that project. (We could create as many as we want, but one is the simplest.) Here’s the code:

public final class DnaSubprojectI18n {

    // These are the internationalized strings ...
    public static I18n propertyIsRequired;
    public static I18n nodeDoesNotExistAtPath;
    public static I18n errorRemovingNodeFromCache;

    static {
        // Initialize the I18n instances
        try {
            I18n.initialize(DnaSubprojectI18n.class);
        } catch (final Exception err) {
            System.err.println(err); // logging depends on I18n, so we can't log here
        }
    }
}

Notice that we have a static I18n instance for each of our internationalized strings. The name of each I18n variable corresponds to the key in the corresponding property file. Pretty simple boilerplate code.

The actual localized messages are kept in a properties file in the same package as this class (though since we're using Maven, the file goes in src/main/resources):

propertyIsRequired = The {0} property is required but has no value
nodeDoesNotExistAtPath = No node exists at {0} (or below {1})
errorRemovingNodeFromCache = Error while removing {0} from cache

Again, pretty simple and nothing new.

Using in your code

At this point, all we've done is define a bunch of internationalized strings. To use one, just reference the I18n instance you want (e.g., DnaSubprojectI18n.propertyIsRequired) and pass it (and any parameter values) around. When you're ready, localize the message by calling I18n.text(Object...params) or I18n.text(Locale locale, Object...params). The beauty of this approach is that IDEs love it. Want to know where an internationalized message is used? Go to the static I18n member and find all references.
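For example, using the class defined above (the parameter value here is arbitrary):

  String message = DnaSubprojectI18n.propertyIsRequired.text("jcr:primaryType");
  String inFrench = DnaSubprojectI18n.propertyIsRequired.text(Locale.FRENCH, "jcr:primaryType");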

The logging framework used in JBoss DNA has methods that take an I18n instance and zero or more parameters. (Debug and trace methods just take String, since in order to understand these messages you really have to have access to the code, so English messages are sufficient.) This static typing helps make sure that all the developers internationalize where they’re supposed to.

With exceptions, we’ve chosen to have our exceptions use Strings (just like JDK exceptions), so we simply call the I18n.text(Object...params) method:

  throw new RepositoryException(DnaSubprojectI18n.propertyIsRequired.text(path));

We’ll probably make this even easier by adding constructors that take the I18n instance and the parameters, saving a little bit of typing and delaying localization until it’s actually needed.
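Such a constructor might look something like this (just a sketch of the idea, not actual DNA code):

  public RepositoryException( I18n msg, Object... params ) {
      // A smarter version would store msg and params and localize lazily
      // (e.g., in getLocalizedMessage()) rather than eagerly like this
      super(msg.text(params));
  }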

Testing localizations

Testing your internationalization classes is equally simple. Create a JUnit test class that subclasses AbstractI18nTest, passing your DnaSubprojectI18n class reference to its constructor:

public class DnaSubprojectI18nTest extends AbstractI18nTest {

    public DnaSubprojectI18nTest() {
        super(DnaSubprojectI18n.class);
    }
}

That’s it. The test class inherits test methods that compare the messages in the properties file with the I18n instances in the class, ensuring there aren’t any extra or missing messages in any of the localization files. That’s a huge benefit!

One more thing …

Remember when I said there was only one class to our framework? Okay, I stretched the truth a bit. We also abstracted how the framework loads the localized messages, so there's an interface and an implementation class that loads from standard resource bundle property files. So if you want to use a different loading mechanism for your localized messages, feel free.

Props to John Verhaeg and Dan Florian for the design of this simple but really powerful framework.

Filed under: techniques, testing, tools

Mockito 1.5

Mockito 1.5 has been released with several nice enhancements. Perhaps one of the most useful is the ability to spy on non-mock objects. In other words, you can verify that methods are called on the non-mock object. So, for example (from the release notes):

   List list = new LinkedList();
   List spy = spy(list);

   //wow, I can stub it!
   stub(spy.size()).toReturn(100);

   //wow, I can use it and add real elements to the list!
   spy.add("one");

   //wow, I can verify it!
   verify(spy).add("one");

I haven’t wanted to do this too often, but there was an occasion or two.

Another improvement is supposed to result in more readable code. Instead of

   stub(obj.someMethod()).toReturn(result);

it is now possible to write:

   doReturn(result).when(obj).someMethod();

Notice that the code is exactly the same length, so it’s clearly up to you whether you think it’s more or less readable. In addition to doReturn(), there’s also doThrow(), doAnswer(), and doNothing().
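The do* family earns its keep elsewhere, though: stub() can't wrap a void method call, and calling stub(spy.size()) on a spy actually invokes the real method. The doReturn()/doThrow() form avoids both problems (the mock and exception below are arbitrary):

   List mock = mock(List.class);
   doThrow(new IllegalStateException("boom")).when(mock).clear();
   doReturn(100).when(spy).size();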

Check out the Mockito documentation for examples and details on how to use them.

Filed under: techniques, testing, tools

To check or not to check (parameters)?

Some topics in software development are all about preferences and personal choices, and so it is with checking method parameters. When is it worth the extra developer time and execution time to check that the parameters of a method adhere to the expectations of the implementation?

What would you want to happen if you passed in a bad value to a method in some library? Would you want some cryptic exception thrown from the bowels of the library? No, chances are you’d rather get an immediate IllegalArgumentException telling you why your value is bad. Plus, you’d like to read in the JavaDocs what is considered a good (or bad) value for each parameter, and what happens when bad values are used.

Not all code has the luxury of a well-defined public API, where developers can focus on providing meaningful messages and documentation for a limited set of "public" methods. Open source libraries rarely have much control over which of their classes will be used, or how. And maybe we don't have to document much at all – can't the developer using the library just look at the source?

My view is that developers should spend a little extra time to make life easier for their customers. This means good documentation and meaningful parameter checks for (most) methods that will or are likely to be used by clients. If you don’t make your source available to your customers, then your documentation probably needs to be “great”, not just “good”.

So what do we do on JBoss DNA? Well, we don't really have a well-defined "public API" – we want our customers and users to be able to use JBoss DNA in ways that make sense for them, and that may include extending the behavior by writing a new implementation of an interface, or using a small component by itself. In short, they'll probably use our software in ways we don't anticipate. Our philosophy is to try to be as helpful as possible, because a little bit of work on our part can make it a lot easier for our customers. So most of the time we'll check the parameters, but in cases where we don't (because of performance, because a method is protected or private, or because there's a good reason not to) we'll still document what the method expects.

To make our lives easier, we’ve created a utility to help us to write simple and readable code that checks parameter values and throws meaningful exceptions when a value isn’t valid. Since JBoss DNA is internationalized, all exception messages should be internationalized, and this utility does this, too. So, we can write code like this:

    /**
     * Set the configuration for this component.
     * @param config the new configuration
     * @throws IllegalArgumentException if <code>config</code> is null
     */
    public void setConfiguration( Configuration config ) {
        ArgCheck.isNotNull(config, "config");
        ...
    }

This trivial example shows a couple of things. First, the call to ArgCheck.isNotNull is pretty readable. (In hindsight, CheckArg.isNotNull(config,"config") may have been a better choice. There are other patterns that are even more readable, but then you start trading off performance for readability.) Other examples of ArgCheck methods include isPositive, isNotEmpty, hasSizeOfAtLeast, isGreaterThan, and containsNoNulls. Of course, there are many more.

Second, the ArgCheck methods will internationalize the exception message, so we pass the variable name (which should match the JavaDoc) as the second parameter to isNotNull(...) so that the exception message is meaningful. In this case, the English localization for the message would be “The config parameter may not be null”.

Third, the JavaDoc uses @throws to document the condition under which an IllegalArgumentException will be thrown, and we've chosen not to duplicate that in the corresponding @param. (Then, in cases where we don't check the parameter, we don't put a @throws and we do put the assumption in the @param. For example, "@param config the new configuration; may not be null.")

When do we not use ArgCheck? We rarely check parameters in protected methods, since we have much more control over how those methods are called. And we never need to check parameters of private methods. And some public methods are called so frequently that even the overhead of checking the arguments is not worth it, but in those cases we still try to explicitly say so in the methods' JavaDoc.

So, in general, we check a lot of our method parameters. What do you do?

Filed under: techniques

Capture thread-safety intent via annotations

Alex points out the difficulties of knowing whether a class can safely be used in a concurrent manner, and what we as developers can do when designing and documenting code for thread safety.

I wholeheartedly agree with his suggestion to use the JCIP annotations. We use them on the JBoss DNA project, and several of us have commented that using the annotations helped us think more about the thread-safety of the code we're writing.

This, I think, is yet another benefit of using the JCIP annotations: they help you write better code because they're a constant reminder of what you intend for the class' behavior in a concurrent world. After all, nobody likes those nasty Schroedinbugs and Heisenbugs.
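Here's the kind of thing we mean – a trivial class using the JCIP annotations (from net.jcip.annotations):

import net.jcip.annotations.GuardedBy;
import net.jcip.annotations.ThreadSafe;

@ThreadSafe
public class Counter {

    private final Object lock = new Object();

    @GuardedBy( "lock" )
    private long value;

    public long increment() {
        synchronized (lock) {
            return ++value;
        }
    }
}

Anyone reading (or modifying) this class later knows exactly which lock guards which state.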

Filed under: techniques

Testing behaviors

I mentioned in my last post how learning Ruby has made me a better Java developer. In particular, learning RSpec opened my eyes to a new way of unit testing.

RSpec is a library for Ruby that is built around Behavior Driven Development (BDD). In BDD and with RSpec, you focus on specifying the behaviors of a class and write code (tests) that verify that behavior. Whether you do this before you write the class is up to you, but I’ve found that outlining the class’ behaviors before (or while) I write the class helps me figure out what exactly the implementation should do.

You may be thinking that BDD sounds awfully similar to Test Driven Development (TDD). In some ways they are similar: they both encourage writing tests first and for fully testing the code you write. However, TDD doesn’t really guide you into the kinds of tests you should be writing, and I think a lot of people struggle with what they should be testing. BDD attempts to give you this guidance by getting the words right so that you focus on what the behaviors are supposed to be.

Let’s look at an example of a class that represents a playlist. The first step will be to decide what the class should and should not do:

Playlist

  • should not allow a null name
  • should not allow a blank name
  • should always have a name
  • should allow the name to change
  • should maintain the order of the songs
  • should allow songs to be added
  • should allow songs to be removed
  • should allow songs to be reordered
  • should have a duration that is a summation of the durations of each song
  • should not allow a song to appear more than once

Really, these are just the requirements written as a list. With BDD and JUnit 4.4, we can capture each behavior specification as a single unit test method. Initially, we’ll just stub the methods, but later on we’ll implement the test methods to verify the class actually exhibits that behavior. And since JUnit 4.4 gives us the freedom to name our test methods anything we want, let’s take a play from the RSpec playbook and put these behavior specifications directly in our test method names. Pretty cool! Just start listing the expected behaviors, and the test methods simply fall out:

public class PlaylistTest {
 @Test public void shouldNotAllowANullName() {}
 @Test public void shouldNotAllowABlankName() {}
 @Test public void shouldAlwaysHaveAName() {}
 @Test public void shouldAllowTheNameToChange() {}
 @Test public void shouldMaintainTheOrderOfTheSongs() {}
 @Test public void shouldAllowSongsToBeAdded() {}
 @Test public void shouldAllowSongsToBeRemoved() {}
 @Test public void shouldAllowSongsToBeReordered() {}
 @Test public void shouldHaveADurationThatIsASummationOfTheDurationsOfEachSong() {}
 @Test public void shouldNotAllowASongToAppearMoreThanOnce() {}
}

By capturing the requirements/behaviors in the test class, we don't need to document them elsewhere. We can even add JavaDoc if a name isn't clear. And, with a little work, we could generate that list of requirements by processing (or sequencing!) our code, as long as we follow the convention that the method names form a camel-case but readable English description of the behavior. (In fact, the org.jboss.dna.common.text.Inflector class has a method to "humanize" camel-case and underscore-delimited strings, making it a cinch to output human-readable descriptions.)

And our test class even compiles. Pretty cool, huh? Oh, and that last requirement that’s not very intuitive? We now have a specification (test method) that verifies the seemingly odd behavior, so if a developer later on changes this behavior, it’ll get caught. (Of course the developer might just blindly change the test, but that’s another problem, isn’t it?)

But back to our development process. At this point, we could implement the test methods using the non-existent Playlist class. It may not compile, but we could then use our IDE to help us create the Playlist class and the methods we actually want. Of course, if this is too weird, you can always stub out the class and then implement the test methods. Personally, I like to implement some of the test methods before going any further, and we’ll use Mockito to stub out a Song implementation.

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;
import static org.mockito.Mockito.*;

import org.junit.Before;
import org.junit.Test;

public class PlaylistTest {
  private Playlist playlist;
  private String validName;
  private Song song1;
  private Song song2;

  @Before
  public void beforeEach() {
    validName = "Pool party songs";
    playlist = new Playlist();
    song1 = mock(Song.class);
    song2 = mock(Song.class);
  }

  @Test(expected = IllegalArgumentException.class)
  public void shouldNotAllowANullName() {
    playlist.setName(null);
  }

  @Test(expected = IllegalArgumentException.class)
  public void shouldNotAllowABlankName() {
    playlist.setName("   ");
  }

  @Test
  public void shouldAlwaysHaveAName() {
    assertThat(playlist.getName(), is("New Playlist"));
  }

  @Test
  public void shouldAllowTheNameToChange() {
    validName = "New valid playlist name";
    playlist.setName(validName);
    assertThat(playlist.getName(), is(validName));
  }

  ...

  @Test
  public void shouldHaveADurationThatIsASummationOfTheDurationsOfEachSong() {
    stub(song1.getDurationInSeconds()).toReturn(134);
    stub(song2.getDurationInSeconds()).toReturn(205);
    playlist.add(song1,song2);
    assertThat(playlist.getDurationInSeconds(), is(339));
    verify(song1, times(1)).getDurationInSeconds();
    verify(song2, times(1)).getDurationInSeconds();
  }

  @Test
  public void shouldNotAllowASongToAppearMoreThanOnce() {
    // Note: Mockito can't actually stub equals() or hashCode(), so a real test
    // would need two Song values that compare as equal; the intent is shown here
    stub(song1.equals(song2)).toReturn(true);
    stub(song2.equals(song1)).toReturn(true);
    assertThat( playlist.add(song1), is(true));
    assertThat( playlist.add(song2), is(false));
  }
}

Now we can complete the Playlist class and round out more tests as we discover new requirements and behaviors. Rinse and repeat. And we’ve done it all with just a little convention and JUnit 4.4, meaning it works in our IDE and in our continuous integration system.

The real change that BDD brings is just thinking differently. So while there are some Java frameworks for BDD (e.g., JDave and JBehave), the real benefit comes from changing your testing behavior, not changing your tools.

I hope this long post has inspired you to rethink how you do testing and to give BDD a try. Let us know what you find!

Filed under: techniques, testing, tools

ModeShape is

a lightweight, fast, pluggable, open-source JCR repository that federates and unifies content from multiple systems, including file systems, databases, data grids, other repositories, etc.

Use the JCR API to access the information you already have, or use it like a conventional JCR system (just with more ways to persist your content).

ModeShape used to be 'JBoss DNA'. It's the same project, same community, same license, and same software.
