What happened on CITCON day 2?
I saw the OpenSpaces concept work, in person, better than I could have imagined. I attended sessions with varying levels of focus, but all filled with passionate participants offering valuable insights. All of the sessions were topics proposed by attendees the prior day and organized into an ad-hoc conference spread across 5 conference rooms and several hours.
Here are my notes (from memory) on a few of the sessions I attended: some of the questions we discussed and, where I have decent recall, some of the points raised in answer to those questions.
Session 1: "100% code coverage + functional tests, what's next?"
I proposed this topic because the team I'm currently on is consistently trying to find ways to push the bar higher. We currently have an extremely high level of code coverage (100%) and functional tests driven by a tool called GreenPepper. This session's discussion jumped around among a number of areas, such as:
- How do you know you're testing the right thing?
- correlate complex areas of code (based on metrics and empirical data) with areas that need more tests.
- record real usage of the application to determine what code is actually hit in production.
- use real data from production.
- use real data from day 1 of your development (the idea being that real data is never as 'clean' as data you fabricate).
- Sometimes when people talk purely about 'code coverage' levels, it's a smell.
- How can tests express what they intend to cover, so that coincidental coverage doesn't count?
- Sometimes a high hit-count of coverage on a given line of code is a smell.
- Running test coverage measurements with...
- just unit tests
- just functional tests
    - just integration tests
- Keeping code performant: proactively, reactively, or predictively?
- express expectations up front, code to them, test with unit/functional tests
    - proactively measure performance deviations over time (any known tools? nope)
- optimize after the fact -- when you need to.
- measuring performance with and without tests.
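The "run coverage separately per kind of test" idea can be sketched with nothing but Python's standard-library trace module; the production function and both tests below are made up for illustration, not from the session:

```python
import trace

def discount(price, rate):
    """Toy 'production' code whose coverage we want to measure."""
    if rate > 1:               # defensive branch: accept percentages too
        rate = rate / 100
    return price * (1 - rate)

def unit_test():
    assert discount(100, 0.1) == 90

def functional_test():
    assert discount(100, 10) == 90   # exercises the defensive branch

def lines_hit(test_fn):
    """Run one test under the tracer; return the set of line numbers executed."""
    tracer = trace.Trace(count=True, trace=False)
    tracer.runfunc(test_fn)
    return {lineno for (_file, lineno), hits in tracer.results().counts.items() if hits}

# Measuring each suite in isolation shows which lines only the functional
# test reaches (here, the defensive branch plus that test's own body).
only_functional = lines_hit(functional_test) - lines_hit(unit_test)
```

Real projects would use a dedicated coverage tool rather than trace, but the per-suite diff is the point: it separates deliberate coverage from coincidental coverage.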
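The "express performance expectations up front" point can itself live in an ordinary test; everything below (function name, data size, the 1-second budget) is an arbitrary assumption, not a measured baseline:

```python
import time

def process(items):
    """Hypothetical operation that has a stated performance budget."""
    return sorted(items)

def test_process_meets_budget():
    data = list(range(100_000, 0, -1))
    start = time.perf_counter()
    result = process(data)
    elapsed = time.perf_counter() - start
    assert result[0] == 1                                  # still correct
    assert elapsed < 1.0, f"took {elapsed:.3f}s, budget is 1.0s"

test_process_meets_budget()
```

Wall-clock assertions like this are noisy on shared build machines, which may be part of why nobody in the session could name a tool they trusted for tracking deviations over time.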
Session 2: "What is the one true language for writing tests?"
The language of the testers? The language of the developers? A cross-over language?
The group brainstormed qualities we would look for in a testing language and came up with a decent list, which I thought looked a lot like the list of qualities I'd look for in a regular programming language. There was brief mention of the possibilities of JetBrains' Meta Programming System, domain-specific languages, Ruby and what it brings to testing, etc. The discussion then meandered into how to get teams to adopt best testing practices.
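Readability was one of the qualities on that list, and fluent assertion DSLs (in the spirit of what Ruby testing libraries popularized, though sketched here in Python with entirely hypothetical names) show roughly what it can look like:

```python
class Expect:
    """Toy fluent assertion helper; the names are made up, not from any real library."""
    def __init__(self, actual):
        self.actual = actual

    def to_equal(self, expected):
        assert self.actual == expected, f"expected {expected!r}, got {self.actual!r}"
        return self

    def to_contain(self, item):
        assert item in self.actual, f"{item!r} not in {self.actual!r}"
        return self

def expect(actual):
    return Expect(actual)

expect(2 + 2).to_equal(4)
expect("continuous integration").to_contain("integration")
```

The test reads almost like the sentence a tester would say out loud, which is exactly the cross-over quality the session was circling around.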
Session 3: "What is the future of build languages?"
- What's wrong with Ant?
- Why has Ant been so successful?
- What about Maven?
- Procedural vs. declarative: specifying how to build it vs. declaring what you want and what it needs.
- Several attendees brought an impressive amount of experience (build tool authors, well-known book authors, authors of open source testing projects, and experienced technologists).
- Lots of 'Ruby, Ruby, Ruby'.
- I found the smaller sessions more engaging.
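The procedural-vs-declarative point from the build-language session is easiest to see in miniature: a declarative build describes targets and their dependencies, and a resolver works out the order, rather than the user scripting each step. A minimal sketch, with hypothetical target names and actions:

```python
# Declarative build description: each target says what it depends on,
# not when it runs. All targets and actions here are hypothetical.
targets = {
    "jar":      {"deps": ["compile"],  "action": "package classes into app.jar"},
    "compile":  {"deps": ["generate"], "action": "run javac over src/"},
    "generate": {"deps": [],           "action": "generate sources"},
}

def build_order(target, done=None):
    """Depth-first dependency resolution; returns actions in execution order."""
    done = [] if done is None else done
    for dep in targets[target]["deps"]:
        build_order(dep, done)
    action = targets[target]["action"]
    if action not in done:
        done.append(action)
    return done

order = build_order("jar")
# → ['generate sources', 'run javac over src/', 'package classes into app.jar']
```

Ant and Maven users never write the traversal; they only declare the graph, which is the "what you want and what it needs" half of the discussion.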