Advanced Engineering Tech For the more hardcore LS1TECH residents

complex math and software skills

#61, 06-06-2007, 08:26 AM: P Mack (TECH Addict)

Originally Posted by RedHardSupra
the time the air is exposed to heat is kinda important, that's why at lazy rpm/low airflow it's much more ECT biased. but you're right, what kind of relationship it is, i have no clue. any ideas what else influences it? if i have a few potential suspects, i can figure out some weighting system and see which one influences it the most.

so far, great feedback. as a pimp said to his ho's: keep 'em coming!
Yes, the time is important, but it's not important to know the actual msec value. It's only important to know that the time halves when rpm doubles; in other words, t is proportional to 1/rpm. Once you have that relationship right (e^(-t) becomes e^(-r/rpm)), you don't need the actual value of time, because r is empirical.
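A minimal sketch of that idea, assuming (as a guess, since the thread doesn't spell out the blend formula) that the charge temperature is a straight ECT/IAT mix and the IAT weight follows the e^(-r/rpm) shape; `charge_temp` and its arguments are made up for illustration:

```python
import math

def charge_temp(ect, iat, rpm, r):
    """Estimated intake-charge temperature as an ECT/IAT blend.

    w = exp(-r / rpm) is the IAT weight: at low rpm the charge sits
    in the hot manifold longer, so w shrinks and the estimate leans
    toward ECT; at high rpm w approaches 1 and it leans toward IAT.
    r is the empirical coefficient discussed in the post.
    """
    w = math.exp(-r / rpm)
    return w * iat + (1.0 - w) * ect
```

With ECT at 90 and IAT at 30, a low-rpm estimate comes out near coolant temperature and a high-rpm one much closer to intake air temperature, which matches the "ECT biased at lazy rpm" observation.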
#62, 06-06-2007, 09:00 AM: P Mack (TECH Addict)

I've thought about it some more, and it's not really circular, because you used the wideband to figure out what airmass was instead of using logged airmass. But I do see another problem with finding r. You probably won't be able to compare your r to GM's r, because you calculated yours from scratch and theirs was almost certainly empirical. The only way you could really calibrate r empirically would be using fuel trims or actual vs commanded AFR. And the only way to isolate BIAS table errors from VE table errors then would be to change IAT independently at a constant MAP and RPM.

BTW, why is it you trust GM's VE tables more than the temp bias table anyway? It seems like any mods you do (heads, cam) would have a huge effect on VE, but you could still trust the BIAS table.
#63, 06-06-2007, 04:52 PM: RedHardSupra (TECH Fanatic)

the physical phenomenon is circular, but the way the PCM does it is just prior state -> observation -> adjustment -> next state (aka the definition of closed loop).

it's a good example of where math and physics are sometimes too complicated for the real world. we can keep expanding that airflow-depends-on-temperature-depends-on-airflow loop, but ultimately it's going to create changes in temperature so small they're irrelevant for our purposes. thus, while i'm modeling it with a nice proper function, ultimately it just has to be 'close enough' that we can make some lookup tables out of it (aka the BIAS table).

it is a damn good question, though, how we should approach it. instead of the Newtonian model, i could just make 'maf group' bins just like the BIAS table dictates, and optimize those. it'd probably be easier, but then it becomes an implementation nightmare: every new model/year has a different number of airflow bins, all with different ranges, so you'd have to adjust your algorithm for every application. with the e^(-r/rpm) model, the algorithm doesn't know about any of these implementation details; it just yields a parameter describing a curve. what you do with it, how you convert it back to the BIAS table, with how many intervals, is irrelevant: the algorithm stays the same, and the precision is the same for all cases too, which is nice. so it's your usual theoretical elegance vs pragmatic implementation argument.

as for the rest, i do not trust GM's VE numbers at all. the whole effort behind getting the BIAS right is to then use it to nail VE perfectly, no matter what temperatures you throw at it.

what i'm doing so far is trying to obtain the parameters describing a function that will yield the temps that would cause the current fueling.
another approach (bigger, more complex, thus left for later) is to tweak both the bias table and the VE table at the same time, adjusting AFR in such a way that AFRwb == AFRcommanded. i did that once already and it seems to work, but i wanted to go through the simpler case first (that's why i wanted to just deal with exp(-r), so i have only one param to adjust). i'm new to a lot of this, and i'm learning as i go, so i don't claim that any of this is anywhere near right. it does, however, seem to work, number-wise at least, until some testing commences (aka my 'data-bitch' with an E40 comes back from vacation)

it would be best to be able to tweak everything at the same time and let the computer decide what set of params converges the quickest. however, that's also exponentially more complex to do (nonlinear multiple regression fitting of large datasets, anybody?), unless you've got a simpler method. we could also use a real function for IFR, as that seems to have some seriously non-linear issues sometimes as well.

for now, i just want the BIAS to settle and see if the new values make VE less twitchy, so we get rid of all the 'rich in morning, lean in afternoon' and 'my VE wanders around within 10 points, never really converging' problems; there's been enough of them lately on all the boards. once that's settled, we can move on to other things.

for now i'm just happy to be able to have an intelligent conversation about this stuff, it's good to know i'm not the only nut that sees these issues.
#64, 06-06-2007, 05:43 PM: ringram (TECH Fanatic)

Excellent work guys. Most/all of it is over my head, but don't let that stop you.
I'm not sure if any of the opensource matlab clones have the features to make them a viable alternative, but check out Octave, Scilab, and FreeMat. Might save a few bucks.
#65, 06-06-2007, 05:44 PM: SScam68 (11 Second Club)

Originally Posted by RedHardSupra
math and physics are too complicated sometimes for the real world.
I have found this to be the other way around
#66, 06-06-2007, 05:52 PM: RedHardSupra (TECH Fanatic)

Originally Posted by SScam68
I have found this to be the other way around
bwhahaha, great comment. what i meant was: the real world is too complex for math and physics, and those are often too complex for what we can put in hardware that can do stuff quickly enough to be usable, and do it while being cheap and reliable while experiencing 1g+ acceleration and 100°C swings.
oh, this is one entertaining thread...
#67, 06-07-2007, 08:38 AM: ringram (TECH Fanatic)

Some interesting side progress on this front here: http://forum.efilive.com/showthread.php?t=5451

Not sure of the details of what they're using, but it looks like they're reading values direct from the ECM.
#68, 06-08-2007, 10:00 PM: joecar (TECH Senior Member)

Good read.
#69, 06-10-2007, 10:09 PM: RedHardSupra (TECH Fanatic)

quick update:
1. i tried modeling the temps as a simple lookup of some discrete values (aka just like what the PCM does) and it works, but it's actually slower, since it's changing 16 variables instead of one.
2. looked at some newer bins, and the BIAS table is referenced not only against airflow, but also against speed. so apparently they want an even more precise temp model, so i think we're chasing a good goal. the bad part, of course, is there are even more damn variables to optimize. so instead of r*MAF now i have r*MAF*SPEED, and it finds some other r. how well does it work? i dunno; i've been trying to read more about estimation theory to get a better understanding of what to optimize for (mean squared error, minimizing the maximum error, and some other interesting approaches)--anyone know anything about this statistics stuff?
3. i tried the full model for the e38 stuff (aka 'no VE') and the temp modeling at the same time, just letting the computer decide what makes the best overall model. it works, but it's kinda slow. i had 6 vars for the VE estimation, and another 6 (i did it the same way they do the 'noVE' table, for giggles) to model the new airflow- and speed-dependent bias. i didn't optimize it for all the different zones yet; i just treated the full map/rpm range as one huge 'zone' to see if it works, and with 65k samples of data it took a while. a lot of different methods arrive at a lot of different results too, so there's not going to be 'one true solution' but more of a 'how much longer do you want me to crunch on this?'
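On the "what to optimize for" question in point 2, the two criteria mentioned look like this side by side (a toy sketch on a list of residuals, nothing PCM-specific):

```python
# Two common fit criteria from estimation theory, applied to the
# same residuals (predicted minus observed):

def sum_sq(residuals):
    # least squares: total of squared errors; tolerates one outlier
    # better than it tolerates errors spread everywhere
    return sum(r * r for r in residuals)

def worst_case(residuals):
    # minimax: only the single worst error matters
    return max(abs(r) for r in residuals)

# the two criteria can disagree about which fit is "better":
a = [0.5, 0.5, 0.5, 0.5]   # many medium errors
b = [0.0, 0.0, 0.0, 0.9]   # one bigger error
```

Here least squares prefers fit `b` (0.81 vs 1.0) while minimax prefers fit `a` (0.5 vs 0.9), which is exactly the kind of trade-off to decide on before picking an optimizer.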

gentlemen, start your supercomputers....
#70, 06-16-2007, 05:59 PM: ringram (TECH Fanatic)

Or read the ECM final values for the calculation, then use your AFR % error to create a traditional VE table in some spare memory space and do a JMP/POP to the VE table for the value rather than crunching the numbers, which looks like what the efi boys have on the cards (my guess).
Sure it's theoretically a backwards step, but it saves a lot of manic maths and is close enough for our purposes without DOD and VVT.
#71, 06-16-2007, 06:07 PM: RedHardSupra (TECH Fanatic)

that'd be a huge step backwards. the whole point of having 30 parametrized zones of VE is so we can calculate the VE number VERY precisely for every RPM-MAP tuple, not just be 'close enough' within 400rpm and ~5kPa.

the new way is going to work awesome for anything with quickly changing efficiencies (variable anything, big pointy cams, FI...) since it can create models for every set of conditions. yes, it's not going to be as easy as it used to be (the 'needs more fuel == crank up the VE value' sort of thing), but it's going to be a helluva lot more precise.

just wait and see, gimme a few more days...
#72, 06-19-2007, 12:01 PM: ringram (TECH Fanatic)

You are right dude. The "Best" way is to use the new coefficients and factors etc. May the force be with you. (The torque force that is)
#73, 06-20-2007, 09:27 PM: RedHardSupra (TECH Fanatic)

another update...
so i did a first test on an E40 car with the Temp Bias table. I concentrated on 9 cells from 60-71kPa and 2800-3600RPM, because they're above 4ms pulse width (i don't want my results muddied by the short pulse adders being wrong for nonstandard injectors), and they get a good range of airflow, from about 50 to 150 g/sec, which covers a good portion of the Bias table.
so we logged all the usual stuff (RPM, MAP, IPW, IFR, IAT, ECT, AFRwb) and created a spreadsheet calculating AFR from my experimental temp model, comparing it to the AFR logged with the wideband. I used GMVE values hand-pasted from the tune into the cells i was experimenting with (friggin annoying, HPT please implement MPID for GMVE or this is gonna be painful!).

all the differences (officially called residuals) were squared and summed together. then i ran a solver to minimize that sum of squared residuals by manipulating only the coefficient in my temp model. i tried many different methods and settings for optimizing this thing, and it always yielded about 0.033 (the number doesn't mean anything by itself; it's just a coefficient that's supposed to be characteristic of this hardware setup).
so with that i created a bunch of BIAS values based on that coefficient. they were all lower than the stock curve, so to all the people who were saying to set it to some fixed value: now you know you're wrong
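That solver step is just one-dimensional minimization. Here's a sketch on synthetic data; the 1 - e^(-k*maf) bias shape, the fake log, and `fit_coefficient` are all illustrative assumptions, with a golden-section search standing in for Excel's Solver:

```python
import math

def fit_coefficient(ssr, lo, hi, tol=1e-6):
    """Golden-section search: find the coefficient in [lo, hi] that
    minimizes a sum-of-squared-residuals function.  Assumes the SSR
    has a single dip on the interval."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c = b - g * (b - a)          # interior probe points
        d = a + g * (b - a)
        if ssr(c) < ssr(d):
            b = d                    # minimum lies in [a, d]
        else:
            a = c                    # minimum lies in [c, b]
    return (a + b) / 2.0

# synthetic check: recover the k used to generate fake "log" data
true_k = 0.033
fake_log = [(maf, 1.0 - math.exp(-true_k * maf)) for maf in range(10, 200, 10)]

def ssr(k):
    return sum((bias - (1.0 - math.exp(-k * maf))) ** 2
               for maf, bias in fake_log)

k_hat = fit_coefficient(ssr, 0.001, 0.999)
```

On real log data the residuals would be AFR differences rather than bias differences, but the minimization is the same.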

Cyclone Chris, as always, was kind enough to be my E40 data bitch and went for two drives, changing only the Bias curve. the first drive was with the stock bias curve, and the second with the Newton curve (i call it the Newton model since it's based on Newton's Law of Cooling).

The results? ehh... no real difference. temps were calculated to be a little lower, which I thought would create less knock, since the PCM would be aware of the increased airmass, but it actually got worse. i'm thinking it's because we didn't change the GMVE, which used to compensate for the wrong Bias curve.

over the next few days i'll post more as Chris gets more data.

here's a little pdf showing two cases in the same areas, stock vs Newton bias curves.

http://www.marcintology.com/tuning/B...ckvsnewton.pdf

and here's a pic of the curves:


the other bias curve, BIASdiscreet, is another model i put in there for curiosity's sake: i just told excel to pick any numbers it wants for the particular Bias cells. overall it yielded a better fit (numerically), but in reality... just look at it, it's nuts.

hope this helps explain where i was going with the whole temperature modeling thingy.

now just a few more trials, adjusting the fuel bias curve beyond just the 50+ g/sec range, and we'll start calculating (not adjusting!) GMVE now that we should have a better temperature model.
#74, 06-25-2007, 11:36 AM: RedHardSupra (TECH Fanatic)

i got the full model optimizing the entire VE table and BIAS all at the same time, in Excel 2007 for now. i ran tests on 1 cell, 9 cells, and all cells. 1 cell ran quickly, but it was optimized for just that one cell, so when applying the same bias to other cells, they were thrown off. the 9-cell block took a while to optimize, but it got it, and it was very nice throughout the 9-cell range. then i went for the full thing, and excel won't even try it; it just says the problem is too big.

so, a few alternatives:
1. do it in large chunks, as large as excel will allow. the problem is that the bias will be optimized only for that range, so there's a danger of the bias not really being representative of the physical reality. bias needs to be optimized globally, and there's no way to do that in excel so far (of course there are silly tricks to try, but i'm not there yet).
2. get a beefy computer, do it in matlab/octave for the full table, and not care.
3. instead of optimizing multiple cells at a time, divide them into zones and express them as a set of parametric response surfaces, just like the E38/E67 do. then no matter how many cells we've got in a zone, we can describe it with 6 parameters per zone. once that's done, numbers can be generated for the needed rpm/map tuples and converted to VE or GMVE.
with large enough zones, that's an improvement over what excel can do now. however, excel isn't too good at the parametric fitting for some reason. so we're back to matlab again.

on another note, is this useful/entertaining/informative to anybody or is this borderline mental masturbation?
#75, 06-25-2007, 06:07 PM: gametech (TECH Veteran, Thread Starter)

I think this will end up being extremely useful, slightly informative (mostly over people's heads), and definitely entertaining (in the watching a train wreck sort of way). It only becomes mental masturbation when you figure something out, but then can't translate it into anything useful.
#76, 06-25-2007, 07:23 PM: trumperZ06 (On The Tree)

Will help tuning when you get it all figured out, Red !!!
#77, 06-25-2007, 09:50 PM: Robert56 (TECH Senior Member)

Originally Posted by RedHardSupra
i got the full model optimizing the entire VE table and BIAS all at the same time in Excel 2007 for now. [...] on another note, is this useful/entertaining/informative to anybody or is this borderline mental masturbation?
This is absolutely great stuff, and you know what, I can almost follow along, or at least enough to get an idea of what you're doing.
Robert
#78, 07-10-2007, 01:30 PM: RedHardSupra (TECH Fanatic)

the saga continues...

the first experiment got verified. the first time around i used 65k samples from a long road trip and calculated the bias coefficient to be k=0.03049 for this one particular car. so we put in a bias curve based on that coefficient, didn't change anything else, and went for another 65k-sample road trip. of course the airmass calculated from the different bias curve messed up the fueling a little, and the car seemed more prone to knock (or just lugging luggage and wife at low rpm in 6th gear), but that was to be expected; i just wanted to see that changing the bias curve really affects fueling. so last night i ran through the new log and calculated k to be 0.029109. that's close to the old value, and it was consistent, which is what i was looking for, since k is supposed to be a hardware descriptor, constant until the hardware changes.

then i set up a spreadsheet containing all the usual log data, the GMVE table, and a few new fields: the temperature calculated from the new bias coefficient, and then the airmass that the new temp would yield. another field compared the predicted airmass (the one we just calculated) with the dynamic cylinder airmass (the one the computer used so far to make all the fueling decisions). this way i have predicted vs observed columns, letting me calculate the difference between them (that's called the residual).

then i calculated the sum of squared residuals (SSR) over all samples. it's a basic statistics trick: you sum up the square of the difference between the predicted and observed values. the smaller it is the better, because you want errors to be zero everywhere. it's squared so identical errors on opposite sides don't cancel each other out. it's a basic but very effective method.

now we're ready to optimize the GMVE table.
fire up Excel's solver and tell it to optimize all the GMVE values with the goal of minimizing the SSR. boom! excel doesn't even wanna try it, with a 'problem too large' excuse (i get that a lot)
ok, let's try a subsection. i look at the original histogram of the data and find an area of 30 VE cells with the most sample count and the biggest AFR%Errors.

Excel runs! and runs and runs and runs some more....

but in the meantime, i see the SSR is going down!
after like 5 mins of crunching on my amd x2 4400 (yes, it's that frigging computationally intensive!) the solver considers the solution 'converged', aka further refinement of the values yields changes so small it's not worth it anymore. the SSR has gone down a bit! of course, we must remember that it's never going to be zero, since the GMVE value used to calculate the airmass is the same within an entire GMVE cell, which for an E40 PCM spans 400rpm and 5.625kPa. so whether you have a piece of data at 58kPa and 3000rpm, or 62kPa and 3400rpm, it's using the same GMVE value, even though they're gonna be different in reality. however, with our 'goodness metric' of SSR, the GMVE values in each cell should end up such that they create the smallest total error over all the different conditions within that cell.

so the metric went down, just like we wanted, and the values it arrived at seem sensible, so i picked another range of 30 cells that this log had data for and ran the optimization again. a few mins later, the metric went down again. rinse, repeat: i did it 4 times, covering the full area the log had data for.

ladies and gentlemen, we have a way to determine the optimum Bias and GMVE values in an E40 computer!

of course it's not verified yet (my tester is gonna try to get some more logs in the next few days) and we'll see if it's any better than before.

ACHTUNG ACHTUNG, ubergeek warning applies to anything below

there were a few interesting bits to all of this:
1. when i got the k value, i wanted to see how sensitive the sum of squared residuals is to different k values for the new dataset. i ran a simulation with k going from 0.001 to 0.999 in 0.001 steps. again, 10 mins later (this is MatLab this time, which is shitloads faster than excel) i get a graph of the SSR for the different values of k. the line seems to just climb from the low range to the high range, without a particular optimal value, which is not what i expected, since my calculations came up with a very particular k coefficient. so, intrigued, i start zooming in around where i expected my optimal value, and there is a significant 'dip' right at it. but with the large number and broad range of values i generated the simulation with, the graph was so dense the dip got lost in the resolution!
when scale and resolution issues show up, the logical thing is to put things on logarithmic scales. i graphed k vs log(SSR), and the 'optimal dip' is much more visible. the interesting part is that the range where the SSR is low is very small; the 'dip' drops down very sharply, which is why with more values on the graph it just got lost next to the huge values around it.
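A sketch of that sweep on synthetic data shows the same effect; the bias shape and the deterministic "sensor noise" here are made up, standing in for a real log:

```python
import math

# sweep SSR(k) over 0.001..0.999 in 0.001 steps, like the MatLab run
true_k = 0.03
fake_log = [(m, 1.0 - math.exp(-true_k * m) + 0.002 * math.sin(m))
            for m in range(10, 200, 10)]

def ssr(k):
    return sum((b - (1.0 - math.exp(-k * m))) ** 2 for m, b in fake_log)

ks = [i / 1000.0 for i in range(1, 1000)]
values = [ssr(k) for k in ks]
best_k = ks[values.index(min(values))]

# the dip is narrow but deep: the SSR spans several orders of
# magnitude, which is why log10(SSR) makes it visible on a plot
# while a linear axis buries it
decades = math.log10(max(values) / min(values))
```

Even with noise, the grid minimum lands on the generating coefficient, and the multi-decade dynamic range is the reason the linear-scale plot looked like a featureless climb.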

this is a very interesting observation, because it means we have to absolutely NAIL the Bias curve! otherwise the errors are just going to balloon, even if our estimate of the bias coefficient is close but not close enough. so to all the people who mangle Bias curves and wonder why VE doesn't converge, here's why: BIAS is about precision across the full range. close enough won't cut it. the military dudes in my school have a saying about 'horseshoes and hand grenades' precision, and this is a good example of the opposite: approximation won't work here, unless you're a mathematical idiot savant or just freakishly lucky.

2. after i optimized the GMVE for the new Bias curve, i wanted to see what changed, and by how much, in comparison with the old GMVE table obtained with the regular 'paste multiply AFR%error' method, which never quite converged and only kinda worked; the idle still left much to be desired. the new Bias curve is weighted more toward ECT at low airflows (<40 g/sec) and much more toward IAT above that. so with that change we can see that at higher-airflow spots on the GMVE table, GMVE was artificially inflated to make up for the wrong BIAS. however, if we were to calculate airflow using that artificially inflated GMVE value when the airflow wasn't actually that high, it would calculate a higher airmass, dumping more fuel than necessary and causing more AFR swings and scatter.
by symmetry, the opposite would happen at low airflow (idle and just off idle): more IAT-biased values created artificially higher airmass, forcing lower GMVE values to compensate for the wrong temperature calculation.
however, if you're driving in a cell that normally gets used for idling (parking lots, coasting down, very light takeoff...), you get a different Bias value (more airflow) but you're still using the same 'overcompensated' GMVE value, so you have both a higher-than-necessary GMVE and a bias value calculating a lot more airflow than you're getting, dumping way too much fuel. this probably explains why we could never get rid of all the low-speed surging.

the new GMVE values reflected all that. in the idle and deceleration areas the numerical GMVE values went down by as much as 200 mg*K/kPa (which is ~10% in some cells!), and in the higher-airflow areas they went up about 100 mg*K/kPa, getting in line with the proper aircharge temperature calculations.

well, we'll see the tests soon enough. i hope this works, cause it's been a lot of frigging work. right now it's all very manual: i use matlab to get k and excel to optimize GMVE tables, so i gotta unite it all into some sort of automated tool or it's going to be unusable.

i'll try to post some pics demoing some of the changes later, i gotta run for now. enjoy!
#79, 07-16-2007, 11:43 AM: PurplePiss (On The Tree)

Just a comment. I once had a huge Access database that I was almost finished with, and it wouldn't let me add a few more columns. I had it maxed out, or so I thought. After some searching on the net, I found a registry entry that M$ set to keep slower computers from barfing on too much data. Perhaps Excel has a similar registry value that you could increase to get it to attempt your calculation. WHAT you're looking for, I dunno though.

That being said, I would love to help you test this stuff on my 05 LS2 A4 GTO. Let me know. Jason
#80, 07-30-2007, 11:47 AM: RedHardSupra (TECH Fanatic)

e38/e67 tuning is done.

the bias excursions i've described above are mandatory: we must have a good temperature model, since the changes in IAT, ECT, airflow, and speed (for the newer cars) can cause >10% swings in airmass within our usual operating ranges.

once that's done, you can properly calculate your 'normalized airmass', aka the GMVE table. for the older cars with a regular VE table, it's just a simple proportion, so it's just one extra step.

once we know the GMVE, i fit the surface described by the 6 parameters. that's called a response surface, and it's a really neat applied-math trick. all you need to know is that it's an equation with 6 parameters, and it spits out one zone of GMVE.

if you use Matlab, fitting the response surface to the generated (real, corrected) new normalized airmass numbers gives an optimal fit, with the least error across the board. then you do it 30 times for 30 zones. together they create your new VE table.
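For the curious, a 6-parameter surface fit can be done with ordinary least squares via the normal equations. This is a pure-Python sketch; the quadratic form z = a + b*x + c*y + d*x² + e*xy + f*y² is my assumption of what the 6 terms are (GM's actual basis may differ), and it stands in for Matlab's surface fitting:

```python
def fit_quadratic_surface(points):
    """Least-squares fit of z = a + b*x + c*y + d*x*x + e*x*y + f*y*y
    to (x, y, z) samples, solved via the normal equations."""
    def basis(x, y):
        return [1.0, x, y, x * x, x * y, y * y]

    n = 6
    # build A = sum(phi phi^T) and rhs = sum(phi * z)
    A = [[0.0] * n for _ in range(n)]
    rhs = [0.0] * n
    for x, y, z in points:
        phi = basis(x, y)
        for i in range(n):
            rhs[i] += phi[i] * z
            for j in range(n):
                A[i][j] += phi[i] * phi[j]

    # solve A * coef = rhs by Gaussian elimination w/ partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = rhs[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))
        coef[i] = s / A[i][i]
    return coef

# synthetic check: fit recovers the coefficients used to generate data
true = [2.0, 0.3, -0.4, 0.05, 0.02, -0.03]
pts = [(x, y, true[0] + true[1] * x + true[2] * y
        + true[3] * x * x + true[4] * x * y + true[5] * y * y)
       for x in range(5) for y in range(5)]
coef = fit_quadratic_surface(pts)
```

One such fit per zone, 30 zones, gives the parameter set described above.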

at this point, for e38/e67, you're done. for e40 you gotta create values for each 'normal' cell using the response surface with the new parameters, switching to the correct parameters when you hop from one zone to another. for the older stuff, convert from GMVE to VE. if your car is really old like mine and uses the half-resolution VE crap for SD mode, then generate the values for the different number of cells. it's very, very flexible and elegant. kudos to whoever came up with this at GM.

this way i can tune all types of VE's with ONE method, the most precise and flexible method we got. no more interpolating and lookups, everything is dynamically generated on the fly (that's probably why they had to use a new PCM, it's much more calculation intensive).

the real genius is going to show when we add things like variable-duration cams, as this model extends itself very easily. for example, we use 6 terms to tune based only on MAP and RPM (aka the traditional VE table), but if we add 2 more variables (i.e. intake and exhaust cams) the number of parameters to tune goes up to 15. add another 2 dimensions and you end up with 28 parameters. that sounds scary, but it's a lot easier to tune, because you're not a victim of 'the curse of dimensionality.' with lookup tables, if you wanted 3 variables instead of 2, you'd end up with many VE tables; if the extra variable had 20 intervals, you'd end up with 20 VE tables--we're having problems getting one of them tuned, now think of doing it 20 times. the VTEC guys have had this problem for years, but they only gain one extra set of VE tables since they have 2 cam profiles, not an infinitely variable one. with this method, you can add a bunch of other variables that affect VE, and it just yields a few more parameters. the fun part is that the method for calculating/optimizing these parameters is exactly the same; the computer will just crunch a little longer.
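Those 6/15/28 counts fall out of the formula for the number of terms in a full degree-2 polynomial in n variables, C(n+2, 2), which a couple of lines can verify:

```python
from math import comb

def n_surface_params(n_vars, degree=2):
    # number of coefficients in a full polynomial response surface
    # of the given degree in n_vars variables: C(n_vars + degree, degree)
    return comb(n_vars + degree, degree)
```

MAP+RPM gives 6 parameters, adding two cam variables gives 15, and two more dimensions give 28, matching the counts above; parameter count grows polynomially where lookup tables would multiply.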

this is THE way to go on tuning now, forget autove bullcrap.

testing and automation to follow, of course. i could use some testers, though; right now i have an NA e40 and an older FI f-body to test on, but if someone's got one of them newfangled e38/e67 computers, or a 2.5bar e40, that'd be helpful too. hit me up if you wanna test.

much relieved,
Marcin

