11.2. VE Learn

11.2.1. Operation

There are several parameters of the controlled process that a smart ECU can tune based on the input signals and observed behaviour:

  • VE tuning
  • ignition advance (DetonationDetection, IonSense)
  • injection timing (PortInjected/SequentialInjection)

Given reasonable starting values it can tune the VE table. With a NarrowBand O2 sensor only the lower power range (loadsites) can be tuned, since at high power the mixture must be rich and the NarrowBand O2 sensor gives no useful information there. A WideBand O2 sensor allows tuning the whole operating range.

VE autotune is implemented in our firmware. Its main input is the EGO correction command (see the NarrowBand page for the EGO correction settings). Secondary inputs are the confidence-related parameters: RPM deviation, kPa deviation and distance from the EGO target.
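As an illustration only, here is a minimal C sketch of how such confidence inputs could be combined into a single weight; the function name, parameter names, units and scaling are assumptions made for this page, not firmware source:

  /* Hypothetical illustration: combine the confidence-related inputs
   * (RPM deviation, kPa deviation, distance from the EGO target) into a
   * single 0..255 weight.  255 = steady operation near the EGO target,
   * 0 = the sample should not influence the VE table at all.
   * Deviation and scale units are arbitrary in this sketch. */
  #include <stdlib.h>

  static unsigned char learn_confidence(int rpm_dev, int kpa_dev, int ego_dev,
                                        unsigned char rpm_scale,
                                        unsigned char kpa_scale,
                                        unsigned char ego_scale)
  {
      long penalty = (long)abs(rpm_dev) * rpm_scale
                   + (long)abs(kpa_dev) * kpa_scale
                   + (long)abs(ego_dev) * ego_scale;
      if (penalty >= 255)
          return 0;                  /* too far from steady state */
      return (unsigned char)(255 - penalty);
  }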

VE learning is a feature that helps the ECM learn from runtime EGO correction factors.

It works with both narrowband and wideband lambda sensors, but wideband is naturally much faster and more precise. In either case the underlying principle is EGO correction.

Learning will not work on

  • loadsites that are not used during the learning drive
  • loadsites for which ego-correction was disabled
  • loadsites for which ego-correction was ineffective (e.g. injectors railed to their maximum)
  • other enrichments (such as startup and cold enrichment) can affect the EGO correction, and therefore VE learn. For that (and other, e.g. oil viscosity related) reasons people tune warmed-up engines. A gating sketch of these conditions follows the list.
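For illustration, the list above translates into a simple gate on each sample; the flag names below are invented for this sketch and are not firmware identifiers:

  /* Hypothetical gating sketch: a sample can only teach the VE table if
   * EGO correction was active and actually effective, and no other
   * enrichment was distorting the mixture. */
  static int ve_learn_sample_usable(int ego_correction_enabled,
                                    int injectors_railed,
                                    int warmup_or_afterstart_enrichment)
  {
      if (!ego_correction_enabled)          return 0;  /* ego-correction disabled     */
      if (injectors_railed)                 return 0;  /* correction was ineffective  */
      if (warmup_or_afterstart_enrichment)  return 0;  /* enrichment skews correction */
      return 1;   /* usable; a loadsite is still only learned if it is visited */
  }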

The way it works is to make the 1x1 (closest) grid value of the VE table (the j table) converge towards the value for which EGO correction would be *1.0. The 2x2 grid mode (which adjusts not only the closest bin but all four bins the engine's operating point lies between) does not work as well.
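A minimal sketch of that 1x1 convergence idea follows; it assumes 8-bit VE entries and an EGO correction scaled so that 256 means *1.0, and none of the names are firmware identifiers:

  /* Hypothetical sketch: nudge the nearest VE entry towards the value at
   * which EGO correction would settle at *1.0.  'weight' is a confidence
   * weight (0..255) and 'speed' plays the role of ve_learn_speed. */
  static void ve_learn_step(unsigned char *ve_bin,   /* nearest VE table entry  */
                            unsigned int ego_corr,   /* 256 == *1.0 correction  */
                            unsigned char weight,
                            unsigned char speed)
  {
      /* the VE value that would make EGO correction converge to *1.0 */
      long target = ((long)*ve_bin * ego_corr) / 256;
      long delta  = target - *ve_bin;

      /* take only a weighted fraction of the step per sample; a real
       * implementation would accumulate the remainder so small errors
       * are not lost to integer truncation */
      *ve_bin = (unsigned char)(*ve_bin + (delta * weight * speed) / (255L * 255L));
  }

With weight and speed at their maximum (255) the entry would jump straight to the target; smaller values give the gradual convergence described above.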

Alternative way: log analysis. VE learning is implemented in the Genboard firmware, but there is an alternative way: log the runtime data with a PC and analyze the log with mstweak3000 (see MegaManual). This also relies on EGO correction. It is not certain that mstweak handles WBO2 properly (it probably should, because it checks the EGO enrichment).

11.2.2. Configuration

" VE learning related configuration " ve_learn_max_power[0]=FF ve_learn_max_power[1]=FF ve_learn_rpm_scale=FF ve_learn_kpa_scale=FF ve_learn_ego_scale=FF ve_learn_tau=FF ve_learn_limit=FF

ve_learn_conf=00 "bit2: 2x2 box" "bit1: simulate" "bit0: enable"

The 2x2 box mode can modify all four neighbouring VE entries at each adjustment, whereas the non-box mode only modifies the nearest VE entry.
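As an illustration of the ve_learn_conf bits listed above (bit assignments taken from the config comment; the macro and function names, and the reading of "simulate" as compute-only, are assumptions for this sketch):

  /* Hypothetical decode of ve_learn_conf, following the bit comments above. */
  #define VE_LEARN_ENABLE    0x01   /* bit0: enable learning                     */
  #define VE_LEARN_SIMULATE  0x02   /* bit1: simulate (presumably no VE write)   */
  #define VE_LEARN_BOX2X2    0x04   /* bit2: adjust the 2x2 neighbour box        */

  static void ve_learn_apply(unsigned char ve_learn_conf)
  {
      if (!(ve_learn_conf & VE_LEARN_ENABLE))
          return;                               /* learning switched off */

      if (ve_learn_conf & VE_LEARN_BOX2X2) {
          /* spread the adjustment over all four surrounding bins,
           * weighted by how close the operating point is to each */
      } else {
          /* non-box mode: adjust only the nearest bin */
      }

      if (ve_learn_conf & VE_LEARN_SIMULATE) {
          /* compute the adjustment only, without committing it */
      }
  }

The example config later on this page uses ve_learn_conf=01, i.e. only the enable bit set: nearest-bin mode, actually writing the VE table.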

VE learning config - learning speed

Remember that EGO correction is the key to VE learning. While EGO correction is very fast (damn fast with WBO2), the learning is slowed down somewhat for maximum suppression of transients. The VE learning configuration mainly affects the speed of learning under certain conditions.

EGO configuration

Quick EGO (ego_lag=05 !!!):
  ego_conf=00
  ego_lag=05
  ego_coolant=46
  ego_maxtps=FF
  ego_maxmap=FF
  ego_minrpm=00
  ego_maxrpm=FF
  ego_warmup=A0
  ego_lean_limit=80
  ego_rich_limit=80
  ego_pid_kp=05
  mt_unused=80
  ego_delta=02
  ego_target=19
  ego_pid_window=01

VE learn config

I used a drastic (fast, with all areas allowed) VE learn config instead of a smoother one:
  ve_learn_coolant=46
  ve_learn_max_power=FF
  ve_learn_rpm_scale=30
  ve_learn_kpa_scale=30
  ve_learn_ego_scale=50
  ve_learn_min_weight=0A
  ve_learn_speed=FF
  ve_learn_limit=FF
  ve_learn_conf=01

VE Learn: can it be even faster?

You can see that I use FF for learning speed, but I think it is still not fast enough. Watching the "d" VE diff table while driving, the VE values changed much more slowly than the EGO correction reached its proper value (EGO correction is fast, learning is much slower).

Can it be configured so that the VE value simply changes in lockstep with the EGO correction value? That would be the fastest way to set the VE table, for the brave. Decrease the VE_LEARN_SAMPLES value? I don't expect better results, but you can try at your own risk (the risk of saving some samples that come from transients).

For samples that were collected during a fast change of RPM, MAP or EGO, the learning speed is reduced. The variables below determine the thresholds of RPM variance, kPa variance and EGO variance above which samples are thrown away completely. A higher value means a lower threshold (more cautious, but slower learning); see the sketch after the list.

  • ve_learn_rpm_scale
  • ve_learn_kpa_scale
  • ve_learn_ego_scale
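To make the "higher value means lower threshold" relation concrete, here is a hypothetical sketch; the budget constant and all names are made up for illustration:

  /* Hypothetical sketch of the discard thresholds: a larger *_scale value
   * shrinks the allowed deviation before a sample is thrown away. */
  #define DEVIATION_BUDGET  0x1000   /* arbitrary illustration constant */

  static int ve_learn_sample_discarded(unsigned int rpm_dev, unsigned char rpm_scale,
                                       unsigned int kpa_dev, unsigned char kpa_scale,
                                       unsigned int ego_dev, unsigned char ego_scale)
  {
      if (rpm_scale && rpm_dev > DEVIATION_BUDGET / rpm_scale) return 1;
      if (kpa_scale && kpa_dev > DEVIATION_BUDGET / kpa_scale) return 1;
      if (ego_scale && ego_dev > DEVIATION_BUDGET / ego_scale) return 1;
      return 0;   /* sample kept, though its learning speed may still be reduced */
  }

With scale=0x30 (as in the example config above) the allowed deviation is larger than with scale=0xFF, so more samples survive and learning is faster but less cautious.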

11.2.3. Tuning

11.2.4. Testing and Monitoring