History of OnlineCourse/OffEngineTesting/Regression
2005-03-18 16:19:36 . . . . MembersPage/MarcellGal [emphasised to test the _compiled_ binary, not just the source]


This page is developer info about a proposed framework to automatically test the GenBoard computer and config over millions of cycles. There is a need for more formal testing of the firmware.

Testing should be automated as much as possible, preferably with unit tests to ensure all is well with each revision/compile of the code. This is often called automated regression testing. Why do we need this if it already "works"? Tests can be written to ensure corner cases are handled properly, and that changes in one area don't affect another unexpectedly.

http://slashdot.org/article.pl?sid=05/02/06/1914239&tid=128&tid=185&tid=126

----

Purpose:

* board (HW) testing before shipping

* testing firmware (including new developments)

* testing the compiler (compiled binary)

* testing the config

* collecting runtime statistics

----

Setup

I can allocate

* a linux PC with 2 serial ports

* 2 GenBoard (v2.2 or v3.x)

I can allocate more GenBoards, but I'd be happier if 2 are enough.

The task of the participants:

* tester:

** receives commands from the PC;

** issues trigger and MAP signal, maybe CLT signal too;

** measures the reply parameters

** sends collected statistics to the PC

* tested:

** works as if it were installed in the car

* PC:

** controls the test

** stores statistics on disc (detailed log and histograms). This can be perl / C / java

** helps evaluate statistics (this can be java / octave / perl; maybe, but unlikely, C)

----

Parameters to measure:

For now:

* ignadv and dwell

* injposition and injpw

These may be measured at the same time (through the primary and secondary trigger), or optionally only one of them at a time.

----

Simulated Regression Testing

A good approach is some kind of simulator / virtual machine (another is having one board test the other, which we also do). Ideally it could run at the C source level to facilitate debugging, but I've been unable to find anything quite that advanced so far... Fredrik's work on OnlineCourse/PcEmulation will surely come in useful.

[Avrora]

It's an assembler-level analyzer, simulator and debugger. What's interesting is that it has testing facilities, and the lead developer is keen to get that functionality finalized. This means we can build a basic model of the GenBoard I/O in Java and run a JUnit-ish test suite over the firmware.

With the release of Avrora 1.40 Beta, we have a means of black-box unit-testing the firmware. It has a gdb interface now, so it might be useful for debugging too if we're lucky.

TODO:

* Build ATMega128 CPU model (currently 128L is included - works fine, but runs at 1/2 speed).

* Build minimal GenBoard platform model. In progress! Eight injectors and one RXTX serial port done so far. Need to get the serial port working properly. It opens, but so far no data...

* Pull USART connection functionality from the CPU and onto the platform. Not critical, just nice.

Some other simulators are:

* [AMT VMLab]. This looks very interesting with its hardware simulation...

* [Simulavr and Simulavrxx].

* [AVR Studio].

Please feel free to contribute so we can realise this.

----

There are several ways we can attack the testing problem:

Code review of every line written - necessary, but not enough in itself

Every committed line is read by at least one other developer.

Pros:

* You get input on committed code. Kind of like pair programming, but with a day of lag, thus much less efficient.

Cons:

* Inefficient: the things that developer A missed may also be missed by developer B

* Slow turn-around

Unit-testing - helps to avoid the tedious manual testing after refactoring

Either standard unit-testing, or use of [Design by Contract].

Pros:

* Several frameworks already exist, no need to reinvent the wheel

* Fast regression testing

* All new code has several tests attached to it, and is thus 'guaranteed' to work

Cons:

* Impossible to test everything (for example, hardware specific things)

* Just because a function works, doesn't mean it's 'well behaved' (e.g. it may turn off interrupts for several seconds, or update global variables without turning interrupts off)

Systemwide testing

Testing the entire system by running through a set of scripted tests. A simple example may be 'send in this waveform, expect these RPMs'. More advanced tests would simulate an entire chain of events that happens at roughly the same time.

Could be run against both

* OnlineCourse/PcEmulation - unfortunately this does not test the compiled binary, so it doesn't catch a naughty compiler (version);

* and the real GenBoard (given certain external stimuli).

Pros:

* Tests situations never caught by unit tests

Cons:

* Won't test everything: writing tests that cover every single thing that can happen would be an eternal task.

** Usually, one makes several tests that cover most problems. When a problem occurs in the 'real world', another test is added to cover that particular problem.

I'm almost certain we need this. It's almost the same as what we do with alien ECMs (logging alien advance and fuel pulsewidths).

* stimulation (done for the basic subsystems: Mik's perl helpers to sweep through the space)

* logging (done for the simple cases)

* analysis (long way to go)