IFPA "Official Rules of Footbag Sports" - update f
FlexThis wrote: Coin toss?

Short version: "normalize" all the scores before adding them together. This is less useful in circle, but it should still allow more precision in comparing relative scores for the same player among multiple judges. Since all winners always get 10, though, it doesn't help as much as in routines, where, say, one judge gives a player 5.7 and the other judge gives the same player 5.0, and then all the others are scored relative to those. In routines, it's really valuable to normalize because you capture the relative difference per judge in a way that makes the scores comparable between judges.

It worked really well when I applied this to the judging results from routines in semifinals. In that case, Vasek got 2nd very clearly, and Damian got 3rd. The scores were close, but not *that* close; and when you look at how the judges scored each player relative to the others, it actually did seem Vasek did better than Damian. Of course, if the judges aren't taking care to keep their scores as precise and relative to each other as possible, normalizing doesn't really help. So for this year it isn't usable, because judges weren't thinking that much about relative distance between players. But next year, for routines at least, we'll do that. In circle, you already have to give relative scores, so it won't help as much, but it will go a long way toward avoiding ties if we normalize in both cases.
Steve
Steve Goldberg
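A minimal sketch of the per-judge normalization described above, using simple min-max scaling (the judge names, scores, and exact scaling method are made up for illustration; the Excel function Steve used may differ in detail):

```python
def normalize(scores):
    """Min-max normalize one judge's scores to the 0..1 range,
    preserving each player's relative distance from the others."""
    lo, hi = min(scores), max(scores)
    if hi == lo:                       # all scores equal: no relative info
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

# Two judges scoring the same three players. Judge B uses a wider raw
# range, but after normalization their relative spreads are comparable.
judge_a = [5.7, 5.0, 3.1]
judge_b = [9.0, 8.0, 4.0]

norm_a = normalize(judge_a)
norm_b = normalize(judge_b)

# Sum the normalized scores per player across judges:
totals = [a + b for a, b in zip(norm_a, norm_b)]
```

The key property is that each judge's best and worst performer land on 1.0 and 0.0 respectively, so one judge's habit of scoring high or low across the board no longer dominates the sum.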
Using the built-in Excel normalization function, here are the scores for semifinals in routines (not circle -- I can do those later), just to illustrate what it looks like. Note that the results were identical to the simple sum-of-ranks method we used (the official system), except there were no ties (i.e., Vasek's and Damian's relative ranks were easier to calculate):
Pool A
Tomasz Ostrowski 2.01 = 6th place
Jay Boychuk 4.66 = 4th place
Ken Somolinos 6.53 = 3rd place
Evan Gatesman 3.93 = 5th place
Arkadiusz Dudzinski 10.22 = 1st place
Jim Penske 9.10 = 2nd place
Pool B
Zeb Jackson 0.62 = 6th place
Marcin Bujko 8.15 = 3rd place
Philip Morrison 7.37 = 4th place
Toni Pääkkönen 3.87 = 5th place
Gordon Bevier 8.26 = 2nd place
Jan Weber 9.45 = 1st place
Pool C
Matt Kemmer 0.02 = 6th place
Wiktor Debski 2.11 = 5th place
Anssi Sundberg 2.70 = 4th place
Milan Benda 8.69 = 2nd place
Lon Smith 6.47 = 3rd place
David Clavens 10.30 = 1st place
Pool D
Rafal Kaleta 1.19 = 6th place
Tuukka Antikainen 3.20 = 5th place
Michal Ostrowski 3.53 = 4th place
Damian Gielnicki 8.56 = 3rd place
Nick Landes 10.71 = 1st place
Václav Klouda 8.93 = 2nd place
Note: the actual scores are much more fine-grained; after applying the normalization function, for example, Vasek's score was 8.933409804, whereas Damian's was 8.556448598. So I figure it's extremely unlikely there'd be a tie given this granularity of scores (post-normalization).
Also, it's important to understand that the way this is done, the scores are not comparable between pools -- only within a single pool. We could conceivably normalize across all pools/players/judges but I'm not sure that gets us much. Just know that a score of 10.0 in one pool is not necessarily comparable to a score of 10.0 in another pool; they may be radically different depending on the judges and their distribution of scores for a given pool.
Finally, I don't claim this is all we have to do, but I do think this is on the right track to resolving these issues (within the context of these simple judging systems). There are of course other ways to address this including full formulaic judging systems etc. We have already updated the existing formula-based system to be more relevant (see the published online rules for freestyle routines) but we still need to work on it some more to make it "right" and fair, and also implementable in polynomial time.
Steve
Steve Goldberg
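The placement step in the pool tables above is just a descending sort of the summed normalized scores within each pool. A small sketch, using Pool D's totals from the post as the data:

```python
# Summed normalized scores for one pool (Pool D from the post).
# Scores are only comparable within this pool, not across pools.
pool_d = {
    "Rafal Kaleta":      1.19,
    "Tuukka Antikainen": 3.20,
    "Michal Ostrowski":  3.53,
    "Damian Gielnicki":  8.56,
    "Nick Landes":       10.71,
    "Václav Klouda":     8.93,
}

# Sort descending by total; list position + 1 is the placement.
standings = sorted(pool_d.items(), key=lambda kv: kv[1], reverse=True)
for place, (name, total) in enumerate(standings, start=1):
    print(f"{place}. {name} ({total})")
```

With the fine-grained post-normalization totals (8.93… vs. 8.56…), Vasek and Damian separate cleanly into 2nd and 3rd without any tie-breaking step.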
You are awesome, Steve!
Ben Skaggs
Amateurs practice until they can get it right.
Professionals practice until they can't get it wrong.
No, I don't play soccer. Yes, there are competitions. 4 years. Lots of practice.
I like it. It sounds similar to estimating story points in Agile sprint planning: we compare stories relative to the smallest story. Stories change from sprint to sprint, so the points are only relative within a given sprint -- in this case, pool by pool.
The granularity makes it unmistakably clear by the numbers, and seems more legit.
Thanks for this Steve.
Go out and shred already.
~Damon Mathews
Heh, I just didn't waste my time deleting your entry, so it was "0", and the normalization function mapped that to 0.2. Whatever -- not worth worrying about. And as I said, this was just a quick run using the default normalization function. I could probably find a better one, and/or audit the data in the macro to make sure there's no noise introduced by scratched players and other data-entry problems. But not now; this was just an example to illustrate the point.
Thanks.
Steve
Steve Goldberg
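The scratched-player issue mentioned above is worth seeing concretely. A hypothetical illustration, assuming simple min-max scaling: an undeleted "0" from a scratched player drags the minimum down, which compresses every real score toward the top of the range.

```python
def normalize(scores):
    """Min-max normalize to the 0..1 range."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

real  = [6.5, 7.2, 8.1]   # a judge's actual scores (hypothetical)
dirty = [0.0] + real      # scratched player's undeleted 0 included

print(normalize(real))    # real scores spread over the full 0..1 range
print(normalize(dirty))   # same scores squeezed into roughly 0.8..1.0
```

This is why auditing the input data for scratched players and other entry noise matters before trusting the normalized totals.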