[1.0.6.4] +20 counts as LOW suitability
Moderator: MOD_DW2
[1.0.6.4] +20 counts as LOW suitability
Are there hidden decimals or have you changed the code?
Because +20 suitability was always good to go...
Re: [1.0.6.4] +20 counts as LOW suitability
Been like this for a while.
Game likes to round things for us - so you're looking at a 19.5..19.99 or something - so not quite a 20. I've seen "good 20" and "bad 20" right next to each other in my colonization lists. It's weird, but it makes some sense...
- Cyclopsslayerr
- Posts: 338
- Joined: Fri Mar 18, 2022 3:33 pm
Re: [1.0.6.4] +20 counts as LOW suitability
mordachai wrote: Sat Aug 13, 2022 3:56 pm
Been like this for a while.
Game likes to round things for us - so you're looking at a 19.5..19.99 or something - so not quite a 20. I've seen "good 20" and "bad 20" right next to each other in my colonization lists. It's weird, but it makes some sense...
In most cases, the game rounds up, so 19.1 is displayed as 20. I have a pic somewhere showing how 9+9+15=32, which in the real world is 33, but hey...
Re: [1.0.6.4] +20 counts as LOW suitability
Cyclopsslayerr wrote: Sat Aug 13, 2022 4:40 pm
mordachai wrote: Sat Aug 13, 2022 3:56 pm
Been like this for a while.
Game likes to round things for us - so you're looking at a 19.5..19.99 or something - so not quite a 20. I've seen "good 20" and "bad 20" right next to each other in my colonization lists. It's weird, but it makes some sense...
In most cases, the game rounds up, so 19.1 is displayed as 20. I have a pic somewhere showing how 9+9+15=32, which in the real world is 33, but hey...
Sorry, folks, just to make sure that the 70% shown in the Game Editor is indeed 70%:
Planet suitability: 70%... 0%... run... 70%... run... 69%... run... +1%... run... all LOW... 71%... OK... 70%... LOW... 70.1%... OK... 70.000001... LOW... 70.00001%... OK
data\Races.xml lines 78-93 (Human):
Code:
<ColonizationSuitabilityModifiers>
<OrbTypeFactor>
<OrbTypeId>7</OrbTypeId>
<Factor>0.1</Factor>
</OrbTypeFactor>
<OrbTypeFactor>
<OrbTypeId>17</OrbTypeId>
<Factor>0.1</Factor>
</OrbTypeFactor>
<OrbTypeFactor>
<OrbTypeId>18</OrbTypeId>
<Factor>0.1</Factor>
</OrbTypeFactor>
</ColonizationSuitabilityModifiers>
<MinimumSuitabilityForColonization>
</MinimumSuitabilityForColonization>
Code:
<OrbTypeId>27</OrbTypeId>
<Category>Planet</Category>
<Name>Mangrove Forest</Name>
<QualityRangeMinimum>0.5</QualityRangeMinimum>
<QualityRangeMaximum>0.8</QualityRangeMaximum>
Code:
70.000001 => "70%" LOW
70.00001 => "70%" OK
Code:
69.499996185 => "69%" LOW
69.499996186 => "70%" LOW
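To make that concrete, here's a minimal C sketch of what a binary32 version of the check could look like. The variable names, the 0.5 baseline and the 0.2 threshold are my assumptions for illustration; only the two suitability values come from the list above.
Code:
#include <stdio.h>
#include <math.h>

int main(void) {
    /* the two suitability values from the list above, stored as binary32;
       the 0.5 baseline and the 0.2 ("+20") threshold are assumed values */
    float values[] = { 0.70000001f, 0.7000001f };   /* 70.000001% and 70.00001% */
    float existing = 0.5f;
    float required = 0.2f;

    for (int i = 0; i < 2; ++i) {
        float margin = values[i] - existing;
        int shown = (int)roundf(values[i] * 100.0f);     /* rounded display value */
        printf("shown %d%%, margin %.9f -> %s\n",
               shown, margin, margin < required ? "LOW" : "OK");
    }
    return 0;   /* prints: shown 70%, ... LOW  and  shown 70%, ... OK */
}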
Re: [1.0.6.4] +20 counts as LOW suitability
So the reason for this is doubles. Or floating point - 32bit or 64bit - on standard semiconductor hardware. There's an IEEE standard for these things - they're implemented in CPU and GPU hardware, and they're... sloppy.
Floating point math is hard to do well & fast - and since the digits can grow infinitely, but data storage has always been either a critical or at least very important consideration (more data = slower throughput) - the approach was one of "good middle ground" - trading off accuracy for flexible value range and speed.
But floating point values using this approach are... approximate. Certain values cannot be represented perfectly - only close. Sometimes this difference is meaningless - a rounding error that never matters - and other times - it gives you crazy software behavior, like in this case.
It's always possible to use fixed point instead of floating point if your values are in a range that can be reasonably represented within a scalar type such as a 64 bit integer, which is not lossy, but has a fixed range (there are tricks to adjusting the basis, but in the end, it has a fixed range somewhere on your number line, and a limited resolution - i.e. digits past the decimal, depending on how you use them for fixed point notation).
I've always preferred integers & fixed point numbers due to their stable and predictable behavior - but they're a bit more work to design and use vs. grabbing a float32 or float64, depending on your language of choice, any libraries you want to harness, etc. So you can't always use what you like - sometimes "when in Rome, do as Romans do" applies.
Anyway - TMFI - but your rounding errors are due to IEEE floats.
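A tiny, self-contained C illustration of that approximation - nothing game-specific, just standard binary32/binary64 behavior:
Code:
#include <stdio.h>

int main(void) {
    /* neither 0.7 nor 0.2 has an exact binary representation, in either width */
    float  f = 0.7f;
    double d = 0.7;

    printf("0.7 as binary32: %.17f\n", f);   /* 0.69999998807907104 */
    printf("0.7 as binary64: %.17f\n", d);   /* 0.69999999999999996 */

    /* the classic consequence: a sum that "obviously" equals 0.3 doesn't compare equal */
    printf("0.1 + 0.2 == 0.3 ? %s\n", (0.1 + 0.2 == 0.3) ? "yes" : "no");   /* no */
    return 0;
}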
Re: [1.0.6.4] +20 counts as LOW suitability
Ok, here we go. Probably it was too obvious. But let's see:
Steam store page:
Distant Worlds Returns! Distant Worlds, the critically acclaimed 4X space strategy game is back with a brand new 64-bit engine, 3D graphics and a polished interface to begin an epic new Distant Worlds series with Distant Worlds 2.
MINIMUM:
OS: Windows 8, 10 (64-bit only) - The game runs on Windows 7 but no support will be provided
So it seems we could agree that floating point would stick to 64-bit only and thus use binary64, right?
After trying some online tools, which can be accessed by anybody interested in checking this out on their own, I chose the IEEE 754 Calculator provided by a professor teaching Mathematics and Informatics at the HAW Hamburg. So chances are high that he knew how to set up the tool properly. You can perform basic arithmetic and switch easily between binary64 and binary32, which will come in handy in a moment.
Let's start with binary64: So far, so bad. But I did try some more values before:
Code:
0.7000001 - 0.5 ≅ 0.20000010000000001
0.70000001 - 0.5 ≅ 0.20000001
Well, both results look OK, right?
Now switch to binary32 and use the same values:
Code:
0.7000001 - 0.5 ≅ 0.2000001
0.70000001 - 0.5 ≅ 0.19999999
Now we end up with the results I have gathered using the Game Editor.
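If you'd rather reproduce the calculator runs in code than in the browser, here's a minimal C equivalent of the same four operations (the output precision is my choice, so the printed digits differ slightly from the calculator's display):
Code:
#include <stdio.h>

int main(void) {
    /* same two literals, once as binary64 (double), once as binary32 (float) */
    printf("binary64: %.17g\n", 0.7000001  - 0.5);     /* ~0.2000001,  above 0.2  */
    printf("binary64: %.17g\n", 0.70000001 - 0.5);     /* ~0.20000001, above 0.2  */
    printf("binary32: %.9g\n",  0.7000001f  - 0.5f);   /* 0.200000048, above 0.2f */
    printf("binary32: %.9g\n",  0.70000001f - 0.5f);   /* 0.199999988, below 0.2f */
    return 0;
}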
Looks like someone is promoting 64-bit only while actually using Single-precision floating-point (binary32) instead of Double-precision floating-point (binary64).
While I can not explain how 69.499996185 and 69.499996186 are converted in Game Editor, I can explain this:
Code:
0.695 ≅ 0b00111111001100011110101110000101
0.005 ≅ 0b00111011101000111101011100001010 +
-----------------------------------------------
0.7 ≅ 0b00111111001100110011001100110011 .......... 0.70
0.69499993 ≅ 0b00111111001100011110101110000100
0.005 ≅ 0b00111011101000111101011100001010 +
-----------------------------------------------
0.6999999 ≅ 0b00111111001100110011001100110010 < 0.70 ... 0.69
So again, binary32 is used instead of binary64, resulting in the initial absurdity of 70% - 50% < 20%, or LOW.
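The same additions can be reproduced with a short C snippet that dumps the raw binary32 bit patterns; the helper function is mine, purely for illustration:
Code:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* print a float's value and its raw binary32 bit pattern */
static void show(const char *label, float x) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);
    printf("%-12s %.8f  0b", label, x);
    for (int i = 31; i >= 0; --i) putchar('0' + ((bits >> i) & 1u));
    putchar('\n');
}

int main(void) {
    show("0.695",      0.695f);
    show("0.005",      0.005f);
    show("sum",        0.695f + 0.005f);        /* lands exactly on the bit pattern of 0.7f */
    show("0.69499993", 0.69499993f);
    show("sum",        0.69499993f + 0.005f);   /* one ulp below 0.7f */
    show("0.7",        0.7f);
    return 0;
}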
How to prevent all these miscalculations?
Avoid decimals. You know % from everyday real life. Most currencies work the same way: USD and cent, EUR and cent, GBP and penny (at the moment), YEN and sen, and so on, and so forth.
Instead of using decimal values multiply them by 100 when starting a new game. Since this is the only time all those fancy decimals in xml-files will be read, the overhead of multiplying them shouldn't delay a game start significantly. Now you have floating-point values without decimals. Any basic arithmetic will work as intended.
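A minimal C sketch of that idea, with made-up names and an assumed +20 threshold, just to show the comparison becomes exact once everything is whole percent points:
Code:
#include <stdio.h>

int main(void) {
    double xml_suitability = 0.70;   /* values as read from the xml files */
    double xml_existing    = 0.50;
    int threshold_pct      = 20;     /* the assumed "+20" requirement */

    /* convert once, at game start, rounding to the nearest whole percent
       (the +0.5 trick is fine here because the values are non-negative) */
    int suitability_pct = (int)(xml_suitability * 100.0 + 0.5);
    int existing_pct    = (int)(xml_existing    * 100.0 + 0.5);

    int margin = suitability_pct - existing_pct;   /* exact integer arithmetic */
    printf("%d - %d = %d -> %s\n", suitability_pct, existing_pct, margin,
           margin < threshold_pct ? "LOW" : "OK");  /* 70 - 50 = 20 -> OK */
    return 0;
}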
Something similar can be done for angles. Instead of using 2.356 for 135°, avoid π-derived decimals in the xml at all costs. It's not worth adding unnecessary rounding errors, and using degrees instead of radians should also be more intuitive. It seems Archimedes and co. had similar thoughts when it came to exploiting degrees.

Re: [1.0.6.4] +20 counts as LOW suitability
Awesome write-up @Thineboot!
Yeah - I'm not surprised that something in the software chain is using f32 instead of f64 -- very likely some UI widget or library that never updated to f64 -- someone thought "that's overkill" and f32 is fine.
Yeah, doing a "multiply by 100" is essentially what I'm talking about for "fixed point" -- you just choose an arbitrary basis - 100ths, 1000ths, 10000ths, whatever -- an i64 has approx +- 9,223,372,036,854,775,807 to work with -- so just consider the final 6 digits to be decimals or whatever, depending on your use case (that would result in +- 1,000,000,000,000.000000 -- well, more than that, but at least that much can be guaranteed with perfect accuracy to 6 decimal places).
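A small C sketch of such an i64 fixed-point layout with 6 decimal digits; the type name and helper below are made up for illustration:
Code:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

typedef int64_t fix6;                 /* value scaled by 1,000,000 (6 decimal digits) */
#define FIX6_ONE 1000000LL

static fix6 fix6_make(int64_t whole, int64_t millionths) {
    return whole * FIX6_ONE + millionths;
}

int main(void) {
    fix6 suitability = fix6_make(0, 700000);   /* exactly 0.700000 */
    fix6 existing    = fix6_make(0, 500000);   /* exactly 0.500000 */
    fix6 required    = fix6_make(0, 200000);   /* exactly 0.200000 */

    fix6 margin = suitability - existing;      /* exactly 0.200000, no rounding error */
    printf("margin = %" PRId64 ".%06" PRId64 " -> %s\n",
           margin / FIX6_ONE, margin % FIX6_ONE,    /* (sign handling omitted) */
           margin < required ? "LOW" : "OK");       /* 0.200000 -> OK */
    return 0;
}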
int64s are also much faster at computation than f64s or even f32s, in almost all cases (certain 3D graphics operations on dedicated GPU hardware must be f32 or f64, but those are massively parallelized and unrelated to the sort of discussion we're having - they produce triangle or pixel computations, for which "close enough" is indeed close enough - a super subtle floating point error in computing some pixel's color or visibility somewhere on a plane of billions of pixels for less than 1/60th of a second is just not a problem - and a good use of floats).
But when your numbers ought to matter - and be correct - both for display and computational purposes - floats should be verboten to you. You should, as a software engineer, know better than to use them anywhere in your code pipeline - unless sloppy is fine... which it isn't nearly as often as most software engineers think it is.
Re: [1.0.6.4] +20 counts as LOW suitability
I forgot to write the obvious: multiply and round. Otherwise you'd carry the error over.
I never took data right after a disaster or during terraforming. Is suitability adjusted constantly, like 1/365 per day - and are there leap years in DW2? How "long" are "seconds", i.e. how many days pass on normal speed? Questions over questions that don't seem to matter. But they do matter, because they decide whether floating point or integer has to be used to keep track of the values - or are they only changed once per year?
The reason I didn't mention using integers at all is that only they know whether they can make use of them, or have to stick to floating point for some unknown reason. Sticking to a single type makes it easier, as you don't have to think about what data type you really need at any given point. Yes, type conversion is everywhere and no programmer really has to care anymore, but each instruction takes time, and when you're talking about a real-time game with that amount of objects, each step counts. It's been years since I ran extensive timing tests. I assume they know what works fastest for them.
The problem arises when you don't expect results like 70 - 50 < 20 but rely on human autocorrection. A so-called AI doesn't look at 70 - 50 and say: whatever you're telling me, ALU, it's 20. It doesn't care - the result is < 20 and that's what matters. And all Empires will "see" a LOW while we humans think it's OK. Since we can only argue about this in a forum, the Empires are right and we dumdums are wrong - until a dev takes it seriously enough.