- Moved the logic of the "debug", "list" and "test" commands from the CLI layer to the engine layer.
- Until now, the CLI modules implementing these commands contained all the logic to load Kyuafiles, iterate over them to find matching tests, and apply the desired operation to them. This kind of code belongs in "driver" modules (aka controllers) of the engine layer, because there is nothing UI-related in it.
- After this refactoring, the code left in the CLI modules is purely presentation-related, and the code in the engine implements all the logic.
- The goal of these changes is to be able to hide the interactions with the database in these controllers. The CLI layer has no business dealing with the database connection (other than allowing the user to specify which database to talk to, of course).
- Implemented a very simple RAII model for SQLite transactions.
- Some additions to the utils::sqlite bindings to simplify some common calling patterns (e.g. binding statement parameters by name).
- Preliminary prototypes of database initialization. This involves creating new databases and populating them with the initial schema, plus handling database metadata to, e.g., detect whether we are looking at the correct schema version.
- The code for this is still too crappy to be submitted, so don't look for it in the repository just yet!
- The design document details many things that should be part of the schema (e.g. "sessions"), but I've decided that I'll start easy with a simplified schema and later build on top of it. Otherwise there will be too many clunky moving parts to deal with while the fundamental ideas are not yet completely clear.
- Fixes to let the code build and run again in NetBSD (macppc at least).
I've now been stuck for a few days trying to figure out the best way to convert (new) in-memory objects to database objects, and to later recover these objects. E.g. what the correct abstractions are to take test case results and put them in the database, and how to retrieve these results to generate reports later on. I now start to have a clear mental picture of what this should look like, but I have yet to see how it will scale.