Template talk:DropTest/1


The computation of the high and low arguments passed to Template:DropTest/4 doesn't seem to make sense to me when I do some dimensional analysis. Within both computations is the term {{{runs}}}*{{{rewards}}}-{{{2}}}/{{{3}}}. Here is what I can gather based on the chain of calls from Template:DropTest to Template:DropTest/3 to Template:DropTest/1:

  • {{{runs}}} is the total number of runs recorded for the mission,
  • {{{rewards}}} is the number of reward nodes passed in all recorded mission runs,
  • {{{2}}} is the number of copies of the item obtained in all recorded mission runs, and
  • {{{3}}} is the quantity of the item that can be obtained per reward node.

Then, the {{{2}}}/{{{3}}} part makes sense: it's the number of reward nodes that dropped the item. But {{{runs}}}*{{{rewards}}} is an odd quantity that I can't make sense of. The number of runs times the number of reward nodes passed? Moreover, subtracting the two quantities doesn't seem to type check.

I imagine the intent is to take the standard deviation across all reward nodes. That is, each reward node passed is a sample. If so, it seems that {{{rewards}}} may have been misinterpreted as the number of reward nodes per mission run?
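
For concreteness, a minimal sketch of that reading (hypothetical names, not Template:DropTest's actual code): every reward node passed is one 0/1 sample of "did the item drop at this node", and the SD is taken over those samples.

  import math

  # Hypothetical illustration, not Template:DropTest's code: each reward node
  # passed is a 0/1 sample of "did the item drop here".
  def per_node_drop_sd(total_nodes, nodes_that_dropped):
      p = nodes_that_dropped / total_nodes   # observed per-node drop chance
      return math.sqrt(p * (1 - p))          # population SD of a 0/1 sample

  print(per_node_drop_sd(120, 45))           # e.g. 120 nodes passed, 45 drops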

--Tribble (talk) 04:10, 17 May 2017 (CDT)

  • I am not going to study my code to figure out why I did it, but it was probably a mathematical simplification. Like instead of writing X^2 * Y / X, I may have simplified it to X * Y... something like that anyhow in order to reduce server load. CodeHydro (talk) 17:12, 17 May 2017 (CDT)
Mathematical simplification is fine, but it shouldn't break dimensional analysis. To try to clarify, here are what I believe to be the dimensions of the four variables:
  • {{{runs}}} is a quantity of mission runs (runs for short),
  • {{{rewards}}} is a quantity of reward nodes (nodes for short),
  • {{{2}}} is a quantity of items (items), and
  • {{{3}}} is a quantity of items per node (items/node).

Then {{{runs}}} * {{{rewards}}} has the dimension runs × nodes, but {{{2}}} / {{{3}}} has the dimension items / (items/node), which reduces to just nodes. Subtracting nodes from runs × nodes seems broken to me. It's like subtracting 5 inches from 2 ounces---it just doesn't make sense.

(I do understand, btw, not wanting to study this code. It's a shame that MW template code is such a write-only language.)

--Tribble (talk) 17:46, 17 May 2017 (CDT)

While I admire that you took the time to examine the source, the reason I'm not examining my own source is that I wrote the thing over a year ago and I know I analyzed my formula thoroughly back then.

In any case, just because you can't see the logic doesn't mean it isn't there. To prove the validity of my formula, I made a spreadsheet here which randomly generates simulated runs. I even wrote a convenient export for you in cell I5. Paste that export into any page on this wiki and you will find that the output in cells F3 and G3 invariably matches the values you see when you hover over the chronitons/unit column:


Mission tested: [[]] Normal
By: anonymous-unreliable
Date(s): [Missing]
Runs: 41   Cost/Run: 1000 Chroniton

  Item     Units   Chroniton/unit   Runs/Drop
  Common   98      418.4            0.4
  Rare     25      1640.0           1.6

Hover text on the Chroniton/unit cell for Common:

  Average (mean) runs per drop: 1
  Based on test averages, to get one more drop, you may need to do another:
    • 1 runs in 50 percent of cases (median)
    • 1 runs in 10 percent of cases
    • 1 runs in 1 percent of cases
  That is, 1 in 100 players may not see this item drop at all even after 0 runs.
  Also, a run dropping only this item is expected per 2 runs or so.
  Statistical Strength: Very reliable
  Range of average cost per Common within 2 standard deviations (~95.5% confidence): 383.31 — 460.48

Hover text on the Chroniton/unit cell for Rare:

  Average (mean) runs per drop: 2
  Based on test averages, to get one more drop, you may need to do another:
    • 2 runs in 50 percent of cases (median)
    • 4 runs in 10 percent of cases
    • 7 runs in 1 percent of cases
  That is, 1 in 100 players may not see this item drop at all even after 6 runs.
  Also, a run dropping only this item is expected per 120 runs or so.
  Statistical Strength: Fairly reliable
  Range of average cost per Rare within 2 standard deviations (~95.5% confidence): 1207.21 — 2556.52

Average cost per unit assumes 3 standard rewards per run.

Please Do NOT include results from "Double-Up" Adwarps.
   Double-Up results are not random and will skew results.

Spreadsheet says Cost/Unit range +/- 2 SD:

  • Common: 383.31 - 460.48
  • Rare: 1207.21 - 2556.52
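
For comparison, here is one way to get a cost-per-unit band of this shape from the kind of counts shown above (hypothetical names and formulas, not necessarily what the spreadsheet or the template actually computes): treat each reward node as a yes/no drop sample, put a +/- 2 SD band on the observed per-node drop chance, and convert that band into chronitons per unit.

  import math

  # Hypothetical sketch, not the spreadsheet's or the template's actual
  # formulas: each reward node is a yes/no drop sample.
  def cost_per_unit_band(runs, nodes_per_run, drops, cost_per_run):
      total_nodes = runs * nodes_per_run
      p = drops / total_nodes                      # observed per-node drop chance
      sd = math.sqrt(p * (1 - p) / total_nodes)    # SD of that estimate
      mean = cost_per_run / (nodes_per_run * p)    # average chronitons per unit
      low = cost_per_run / (nodes_per_run * (p + 2 * sd))
      high = cost_per_run / (nodes_per_run * (p - 2 * sd))
      return mean, low, high

  # The "Common" row above (41 runs, 3 nodes per run, 98 drops, 1000 chronitons
  # per run) comes out near 418 with a band of roughly 383-460, in the same
  # ballpark as the quoted figures.
  print(cost_per_unit_band(41, 3, 98, 1000))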

Remember, I didn't even look at DropTest's code when making this... you can doubt my formula all you want, but are you going to doubt the ability of Google's engineers to write the STDEVA formula? CodeHydro (talk) 22:03, 21 May 2017 (CDT)

Thanks for taking the time to create that spreadsheet. I was originally looking at the source not because I thought there was anything wrong with it, but to understand how the quality of the drop data is being determined. It was in the course of figuring that out that I came across what seemed like a bug. The spreadsheet does clarify things for me, and I now realize that {{{rewards}}} actually is the number of reward nodes per mission run, like I thought it should have been in my original comment. It turns out I had missed the /({{{runs}}}) on line 12 of Template:DropTest. Things are making sense now. Thanks again for your time, and sorry to have been a bother. --Tribble (talk) 00:52, 22 May 2017 (CDT)
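
With that reading the dimensions do line up. Writing r for {{{runs}}}, w for the {{{rewards}}} value that /1 actually receives (reward nodes per run), d for {{{2}}} (items obtained) and q for {{{3}}} (items per node), the term in question becomes

  r \cdot w \;-\; \frac{d}{q}
  \;:\quad
  \mathrm{runs} \cdot \frac{\mathrm{nodes}}{\mathrm{run}} \;-\; \frac{\mathrm{items}}{\mathrm{items}/\mathrm{node}}
  \;=\; \mathrm{nodes} - \mathrm{nodes}

i.e. the total number of reward nodes passed minus the number of nodes that dropped the item, which is a plain count of nodes.
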
No problem. I remember looking at that same line about a month after I wrote it and thinking "What the...?" (when I was adding the tooltip feature). So I wasn't really surprised at all. In fact, when coding it and seeing how many mathematical leaps of intuition I was making, I anticipated that someone someday would question where the heck I got those formulas. Thanks for giving me the chance to show off my math ninja skills ;) CodeHydro (talk) 10:00, 22 May 2017 (CDT)
I simplified the SD formula a bit (by doing some formula transformations), getting rid of the ^2's and two of the four square root computations. This resulted in a speedup of about 10% on heavy pages such as Casing. I won't say it is now simpler to reconstruct where these formulas came from - but it IS less expensive to evaluate ;-) --Crunch (talk) 16:43, 14 October 2018 (CDT)
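
As a generic illustration of the kind of transformation described above (not the template's actual expression), identities such as

  \left(\sqrt{a}\right)^{2} = a
  \qquad\text{and}\qquad
  \sqrt{a}\,\sqrt{b} = \sqrt{a\,b} \quad (a, b \ge 0)

are the sort of rewrites that remove explicit ^2's and merge several square-root evaluations into fewer, cheaper ones.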