Giving tv4 a pulse with CI


on a java project using ant

  1. get it to compile locally
  2. download and start hudson: ‘java -jar hudson.war’
  3. configure hudson to checkout and run ‘ant compile’
  4. add unit tests
  5. configure hudson to checkout and run ‘ant test’
  6. move hudson and make it poll
  7. get emma to run so you can get some metrics
  8. add hudson emma plugin and configure hudson to run ‘ant emma-run-tests’
  9. using jsp? create an ant target to compile your jsp’s
  10. configure hudson to ‘ant emma-run-tests compile-jsp’

one of the most critical pieces of our infrastructure is our continuous integration server. we use hudson to build all of our projects and run our tests. setting up a new project that uses java or ruby takes about 10 minutes. getting to the point where configuring a new job only takes 10 minutes took a few weeks. the following is how we got ci up and running for one of our large java projects that had no tests. it probably took 10 eight-hour days of work, but those days were spread out over a year and a half as we were working on other things. if you're looking for details on how to set up hudson this is not the place to start; this is more of an account of the general approach we took.

start with getting it to compile locally

this seems simple but don't take it as a given. one of the first reasons i installed hudson was to make sure that our large code base would compile at all. this was a challenge as we had developers from different projects working in pretty much the same areas of the code. we use ant and have a target called compile, but not everybody ran it and sometimes files were not properly checked in. it took a couple of hours to get this working as there was some special setup required to get it to run at all locally. we finally got to the point where running:

ant compile

was all it took to compile our code.
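a minimal build file for this stage could look roughly like the following sketch (the directory names src, lib and build, and the project name, are assumptions, not our actual layout):

```xml
<project name="myproject" default="compile">
  <!-- define the classpath once, so the test target can reuse it later -->
  <path id="build.classpath">
    <fileset dir="lib" includes="*.jar"/>
  </path>

  <target name="compile">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes" classpathref="build.classpath"/>
  </target>
</project>
```

keeping the classpath in a named <path> element, instead of inline in the javac task, is what makes the later refactoring for the test target painless.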

the next step is to download and start hudson on my machine and get it up and running. don't get overly ambitious here. as long as you have java installed, running:

java -jar hudson.war

should get hudson up and running and you can begin to configure hudson to checkout your code from your version control system. one tip is to create a special user in your version control system that only has read access.

configure hudson to checkout and run ‘ant compile’

in hudson, under build steps -> target, add compile. this is the target that will be run when you trigger a build.

add unit tests

now is when hudson starts paying off. adding unit tests to a large code base can be frustrating. it can be done, but it will seem like a monumental task if you have never used junit/testng before. even if you are familiar with junit/testng it will not be easy. but the payoff is more than worth it. if you haven't worked with unit testing before, remember that the goal for now is just to get

ant test

to work. start off by creating a directory next to your java src code called test; this will be the root of your test hierarchy. add a new target to ant that includes your build classpath (ie the classpath used to run ant compile). you might at this point need to refactor the classpath out of the compile target so you can reuse it as part of the test classpath. next add one test that passes and one test that fails. by this i mean something like this:
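a minimal sketch of what those first two tests could be checking (the class name is made up; in the real project these would be junit test methods run by the ant target — this standalone version just prints a summary in the same style as ant's junit output):

```java
// One check that passes and one that deliberately fails, to prove that
// the test target actually runs and reports failures.
public class SanityCheck {
    static int failures = 0;

    static void check(boolean condition) {
        if (!condition) {
            failures++;
        }
    }

    public static void main(String[] args) {
        check(1 + 1 == 2); // the test that should pass
        check(1 + 1 == 3); // the test that should deliberately fail
        System.out.println("Tests run: 2, Failures: " + failures + ", Errors: 0");
    }
}
```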
running ant test should now end with a summary like:

Tests run: 2, Failures: 1, Errors: 0

if you get that, you can remove the sanity tests and start writing real unit tests.
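the test target itself might look roughly like this sketch; the directory names and the build.classpath reference are assumptions about your build file, and the junit task needs junit.jar on ant's own classpath:

```xml
<target name="compile-tests" depends="compile">
  <mkdir dir="build/test-classes"/>
  <javac srcdir="test" destdir="build/test-classes">
    <classpath>
      <path refid="build.classpath"/>
      <pathelement location="build/classes"/>
    </classpath>
  </javac>
</target>

<target name="test" depends="compile-tests">
  <junit printsummary="yes">
    <classpath>
      <path refid="build.classpath"/>
      <pathelement location="build/classes"/>
      <pathelement location="build/test-classes"/>
    </classpath>
    <!-- run everything under test/ whose name ends in Test -->
    <batchtest>
      <fileset dir="test" includes="**/*Test.java"/>
    </batchtest>
  </junit>
</target>
```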

configure hudson to checkout and run ‘ant test’

change the build step from compile to test.

move hudson and make it poll

I ran hudson for almost a year on my local machine with little trouble. the hardest part was remembering to leave my machine on during vacation. the next step is to find a new home for hudson. the requirements are pretty low: one computer, power and network. the computer doesn't have to be anything special; a 40GB hard disk and 1GB of ram should do in the beginning. install your operating system of choice, then install and configure hudson. now that it's not local anymore you can make it poll version control for you. under build triggers -> Poll SCM add */2 * * * *. this will make hudson check your version control every 2 minutes. if anything has been checked in it will start a new build.

make hudson talk

with hudson up and building everything, the next step i take is to add email notification. using hudson's default email notification plugin, hudson will send an email every time the build fails. to make this as visible as possible we created a special mail group that includes everyone that has access to our version control systems. to make it even more visible we use the email-ext plugin and configure it to send mail even when the build is successful. i know what you are thinking: omg! not more mail. my response is: zomg, it's almost 2011, use a mail filter.

it's now that hudson becomes more and more relevant to your organization. especially when:

  • your boss is getting mail every time it's green (yeah!)
  • your boss is getting mail every time it's broken (buuuu). what do you mean it doesn't compile?
  • your builds stay red for too long. what's keeping them from being fixed?

get emma to run so you can get some metrics

once hudson is up and running it's time to get some metrics. why? well, without measuring anything it's really hard to know if things are getting any better. it's good to reflect on how the code base is shaping up now that more and more tests are being written. metrics are good starting points for seeing whether the code that is tested the most is the easiest to work with, or how well the code that is used the most is tested. there are tons of observations that can be made, but only if you have some metrics. we use emma because it's free and not too hard to get working. the emma target is basically the same as the ant test target, except that it excludes some classes that are generated and needs both the build classpath and the test classpath. but once you can run ant emma-run-tests it's a small matter to hook hudson up and make it pretty.
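a rough sketch of such a target — instrument, run the tests against the instrumented classes, then write the xml report. the emma.lib path, the output directories and the excluded filter pattern are all assumptions; check emma's ant task docs against your setup:

```xml
<!-- assumes a path with id "emma.lib" pointing at emma.jar and emma_ant.jar -->
<taskdef resource="emma_ant.properties" classpathref="emma.lib"/>

<target name="emma-run-tests" depends="compile-tests">
  <emma>
    <instr destdir="build/emma/instr"
           metadatafile="build/emma/metadata.emma"
           merge="true">
      <instrpath>
        <pathelement location="build/classes"/>
      </instrpath>
      <!-- skip generated classes; the pattern is a placeholder -->
      <filter excludes="*.generated.*"/>
    </instr>
  </emma>

  <junit printsummary="yes" fork="true">
    <classpath>
      <pathelement location="build/emma/instr"/> <!-- instrumented classes first -->
      <path refid="build.classpath"/>
      <pathelement location="build/test-classes"/>
      <path refid="emma.lib"/>
    </classpath>
    <jvmarg value="-Demma.coverage.out.file=build/emma/coverage.emma"/>
    <batchtest>
      <fileset dir="test" includes="**/*Test.java"/>
    </batchtest>
  </junit>

  <emma>
    <report sourcepath="src">
      <fileset dir="build/emma" includes="*.emma"/>
      <xml outfile="build/emma/report/coverage.xml"/>
    </report>
  </emma>
</target>
```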

add emma plugin to hudson

here you will need to add the ‘hudson emma plugin’ and configure the emma section of your hudson job to point at the emma output (ie build/emma/report/coverage.xml).

using jsp? create an ant target to compile your jsp’s

the next step was to get our jsp's to compile. this came out of the need to upgrade our servlet container from one version of resin to another. we have 500+ jsp pages and no practical way to navigate to all of them, but compiling them gave a bunch of benefits. we got rid of all the pages with jsp scriptlets that did not compile, which gave 503 errors every time someone navigated to them. there were not many of them, as any jsp's that were surfed to often enough were kept in ok shape, but fixing the few that did not work ended up solving a couple of really low-prioritized bugs.

the other benefit came when we started to move from one version of resin to another (3.0 to 3.1): we found that resin 3.1 was much stricter in the way it interpreted jstl and el syntax. this script should work for both resin 3.0 and 3.1.
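our actual script isn't shown here, but one way to do this is with resin's bundled jsp precompiler, com.caucho.jsp.JspCompiler. this is only a sketch: the resin.home property, the web directory and the exact compiler arguments are assumptions, so check resin's documentation for your version:

```xml
<target name="compile-jsp">
  <!-- collect every jsp under the webapp into a space-separated list -->
  <fileset id="jsp.files" dir="web" includes="**/*.jsp"/>
  <pathconvert property="jsp.list" refid="jsp.files" pathsep=" "/>

  <!-- run resin's jsp precompiler; failonerror makes the build
       (and therefore the hudson job) go red when a page won't compile -->
  <java classname="com.caucho.jsp.JspCompiler" fork="true" failonerror="true">
    <classpath>
      <fileset dir="${resin.home}/lib" includes="*.jar"/>
    </classpath>
    <arg value="-app-dir"/>
    <arg value="web"/>
    <arg line="${jsp.list}"/>
  </java>
</target>
```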

once that was in place it was a matter of running that script and correcting the errors. it took about 3 hours to get everything clean. the amount of time and money this has saved is not small. finding out that a jsp doesn't work only once it's live causes a chain of patches, redeploys and testing that really eats time. now this is not an issue, as hudson screams way before the code is deployed, because the jsp's are compiled every time hudson builds this job. fail early, fail fast.

configure hudson to run ‘emma-run-tests compile-jsp’

the last configuration change we have made is to add both emma-run-tests and compile-jsp as targets for hudson to run.

is it worth it? yes. the further i get down the list of testing our code, the better i sleep 🙂 . is there more to do? always. we could add integration tests, but for just this project we have reached a point of diminishing returns. this project probably has the worst test coverage of any of the projects we have right now; all the others have more than 50% and all ruby projects have at least 80%.


Found in translation

The amount of meta data available for the video clips used on our sites, such as TV4Play, is often very limited. Often a video clip has just a title and belongs to a category; it has no description and no keywords. When the number of videos is large it isn't feasible to manually annotate all of them with more meta data, despite this being critical if you want to find a certain video or videos on a certain topic.

When it comes to text there are a large number of maturing techniques for extracting useful information. Latent semantic indexing (PDF) and inverse document frequency (PDF) have become standard tools in systems that operate on text. Extracting information from image and video data is, however, a more complicated feat. The research on image analysis and face recognition has led to software and services offering some basic support for browsing and cataloging objects and people in media, but we are still far from functions that are reliable enough not to confuse unprepared users.

For some of our programs we actually have meta data that isn't used in our search system so far. We have every spoken word recorded, not from using speech recognition but typed in by humans. Yes, it is the captioning/subtitling that we have for some of our programs, e.g. our home improvement show Äntligen hemma, the comedy panel game show Parlamentet, and Emmerdale (Sw. Hem till gården). By adding the texts from the shows to our Solr index it would be possible to find episodes on building your own sauna, episodes that discuss the mishaps of the Swedish king, or the episode where one of the characters is drunk.
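Getting a caption track into the index could be as simple as posting a document like this sketch to Solr's XML update handler (the field names and the id are made up for illustration, not our actual schema):

```xml
<add>
  <doc>
    <field name="id">antligen-hemma-episode-42</field>
    <field name="title">Äntligen hemma</field>
    <field name="category">home improvement</field>
    <!-- the full caption text becomes a searchable body field -->
    <field name="captions">the complete subtitle text of the episode goes here</field>
  </doc>
</add>
```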

Another use for this data is to visualize it. Word clouds for some of our shows look like this when entered into Wordle (taking care not to end up with too horrendous font choices):

Bonde söker fru
The rural dating reality show “Bonde söker fru” contains names such as “Ann-Katrin” and “Tomas”, words such as “farm”, “children” and “feel”.

Robinson
The word cloud for Robinson (Survivor) contains names such as “Daniel”, “Jukka”, and “Elin”, the team names “Buwanga” and “Kalis”, and words such as “feels”, “win”, “the competition”.

Parlamentet
Prominently on the word cloud for the comedy panel game show Parlamentet we see the names of the participants “Annika”, “David”, and “Marika” (but “Josephine” in smaller size). We also see current topics such as “the king” and “Facebook”.
Hem till gården
From the words in the cloud for Emmerdale (sw. Hem till gården) we can guess that it is a drama series. “police”, “repulsive”, “crisis”, “family”, “Hello”, “careful”.
Äntligen hemma
No doubt this is the home improvement show Äntligen hemma with words such as “paint”, “wall”, “kitchen”, “material”, “wallpaper”, “color”.

A different perspective on analytics

About a week ago Disqus – the commenting platform we use on all of our sites – rolled out a new Analytics feature. Besides giving us a good overview of the number of comments, likes, reactions and people, it also expresses the number of comments as something you can relate to in a different way:

So now we know that our fifty-six thousand comments equal eight times the book Moby Dick if you read them from first to latest. Sites with fewer comments are compared to a smaller unit: the number of SMS messages.

Three obvious things we missed in testing

This summer we released a new version of the weather site Vä (“The Weather Channel” in Swedish), this time built using the Ruby on Rails framework. Although we have a few tests in the test suite for the site, it is far from complete. Three of the things that we have fixed since the release are the results of phenomena that shouldn't come as a surprise. Three things that are not as certain as death and taxes, but not far from it:
  1. There are months with 31 days.
  2. We actually have daylight saving time.
  3. It’s cold in the winter.

One: The first phenomenon was discovered at the end of August when suddenly the forecasts for September 1st shown on the site were dated October 1st. It took a while before we realized what had happened. The code for setting the date looked something like this:

var forecastDay = new Date(2010, 7, 31);
// Tue Aug 31 2010 00:00:00 GMT+0200 (CEST)
forecastDay.setMonth(8); // September only has 30 days, so the 31st rolls over
forecastDay.setDate(1);
// Fri Oct 01 2010 00:00:00 GMT+0200 (CEST)

Since August has 31 days but September only 30, the 31st day of September really is October 1st. So when we set the day after setting the month, we are suddenly in October. The easy fix? To set the day of month before setting the month. The correct fix? To set year, month, and date in the same method call.
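Both fixes, as a quick sketch (the variable names are made up; remember that Date months are zero-based, so 8 is September):

```javascript
// the buggy starting point: Aug 31st 2010
var fixed = new Date(2010, 7, 31);

// easy fix: set the day of month before setting the month,
// so the current day can never overflow the new, shorter month
fixed.setDate(1);  // Aug 1st
fixed.setMonth(8); // Sep 1st

// correct fix: set year, month and date in the same call
var best = new Date(2010, 8, 1); // Sep 1st 2010
```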

Two: Around the end of October, a few days before Europe goes back to standard time, we see that a number of forecasts are missing. Since we have had trouble with our data supplier, SMHI – the Swedish national weather service – before, they are our prime suspect. But when we look at the data files they supply, they seem to have all the data they should. There is, however, one thing that looks different: after the following Saturday all forecasts are given another timestamp. The forecasts are now for 1pm instead of 2pm. The Javascript code assumes, however, that the timestamp will be the same as on the first day of the ten-day forecast. So during the transition between daylight saving time and standard time some forecasts will be missing. I guess the error can partly be blamed on our data supplier, since the time zone used should have been documented. But mainly it is our fault, since the transition shouldn't have come as a surprise.

Three: When the Flash rendering of the ten-day forecast suddenly went blank for some places up north last week we had a number of theories on the cause. Once again the weather service was a prime suspect but the data was in order so we began to think that it was something different with the data that the Flash code couldn’t handle. And sure enough, it turns out that the code rendering the curve of temperatures didn’t work when all temperatures in the forthcoming ten days were in the sub-zeros (Celsius degrees that is).

Testing: One thing to learn from these errors is to set up tests for all the corner cases you can come up with. All three of these errors can be classified as such. All three are also errors in code that runs on the front-end. There are tools for unit testing Javascript code, e.g. Jasmine, but they are not used as much as unit tests for backend code, and we have only recently begun to use such tools on our TV4 Play project. There is work on unit testing in Actionscript as well (such as AsUnit), but since the code in question was developed by a third party, we should start by having them better document the work they do. And make it possible for us to test.