I think it's important to note that, given there were 5 ranking points difference and 6 judges, it's probable that at least 2 of the judges (and possibly 3) did not rank Damian first, so I don't think explanations from particular judges are meaningful. Nor should this be about targeting and blaming the particular judges who did rank Damian first. I absolutely accept their decision and would completely oppose changing it.
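To show the arithmetic behind that inference, here is a back-of-envelope sketch in Python. It assumes the rank-sum system from the draft rules quoted later in this post, and it assumes an 8-player final (the field size is my assumption, purely for illustration):

```python
def max_gap(dissenters, judges, players):
    """Largest possible rank-sum deficit of the runner-up behind the
    winner when `dissenters` judges rank the runner-up below 1st.
    Best case for a large gap: the runner-up is 1st from every other
    judge but last (= players) from each dissenter, while the winner
    is 2nd wherever the runner-up is 1st and 1st from each dissenter."""
    runner_up = (judges - dissenters) * 1 + dissenters * players
    winner = (judges - dissenters) * 2 + dissenters * 1
    return runner_up - winner

# With 6 judges and an 8-player final, a single dissenting judge
# cannot produce a 5-point gap, but two dissenters can:
assert max_gap(1, judges=6, players=8) < 5   # gap of at most 2
assert max_gap(2, judges=6, players=8) >= 5  # gap of up to 10
```

In other words, for only one judge to have ranked Damian below first and still produce a 5-point gap, that judge would have had to rank him 11th or worse, which is impossible in a small final.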
Of course there are a number of issues, and despite the opinions of those who don't think discussing things is helpful, I think working out what those issues are is very important for footbag.
Unless the pro-Damian people are prepared to actually put forward their opinions, I don't think the debate about which routine was best can go much further. Having reviewed the videos a few times, I think it was very close between Sergey and Vasek. I might even lean slightly towards Sergey, as he probably had better variety than anybody else, strong choreography and high difficulty. Unfortunately the technical peak of his routine was at the start, so by the end of the routine it was less fresh in anybody's mind.
I think the main issue I've taken from the debate is this:
There is clearly no consensus within the footbag community about what routines should be judged on and how to quantify the skill level of a routine.
Despite the comments by a number of people, the rules used for judging routines (which I don't think are published anywhere publicly, as they have not yet been approved by the IFC) do not answer this question at all. Here is the relevant draft section:
505.02. Judging formats:
505.02.01 . Artistic Technical Voting Format:
505.02.01.01. Judges:
A panel of judges evaluates each routine. Each judge shall be
independent of the other judges and shall not discuss decisions until
after the results have been submitted. The desired amount of judges is
4 to 6. It is advised that judges are experienced with the system and
with freestyle footbag. Judges are not allowed to compete in the pool
or final that they are judging and it is recommend that they do not
compete in the event if possible.
505.02.01.02. Judging Aids:
A panel of judging aids shall assist the judges if desired. This shall
comprise the following:
a) Runner: relay information and results between the judges and the
appropriate people.
b) Time keeper: record the time taken for the routine.
c) Drop counter: record the number of drops in the routine.
d) Add counter: record the total number of adds in the routine.
e) Contact counter: record and total the number of contacts in the routine.
If capable, aids are allowed to fill more than one position.
Aids are not allowed to compete in the pool or final that they are
involved with and it is recommended that they do not compete in the
event if possible.
505.02.01.03. Judging Scores:
The judges shall judge the routines based on two criteria:
a) Technical Merit:
Technical merit refers to the difficulty of the tricks and combinations
performed by the competitor, the variety of tricks and combinations
demonstrated, the execution of moves and combinations and the general
form demonstrated.
b) Artistic Merit:
Artistic merit refers to the artistic variety of the tricks and
combinations performed by the competitor; the relationship between the
competitor and the music, including the timing of moves to the music
and the rhythm of the routine; the choreography; the start and finish
of the routine; the use of space, time, environment and music; the
competitor's appeal to the audience; the artistic impression,
including style and originality; and the overall impression of the
routine.
Each judge shall award a score from 0 to 6 for each criterion, taking
into account the following guidelines:
a) Technical merit:
0.0 = extremely poor
1.0 = poor
2.0 = weak
3.0 = reasonable
4.0 = good
5.0 = very good
6.0 = perfect
b) Artistic merit:
0.0 = extremely poor
1.0 = poor
2.0 = weak
3.0 = reasonable
4.0 = good
5.0 = very good
6.0 = perfect
Note that judges can award scores to any decimal place they feel is
necessary, although they are expected to show common sense.
505.02.01.04. Ranking Players:
Each judge shall rank the players on each individual criterion, with
the player receiving the highest score for that criterion being placed
in first position, and so on. The rank positions given by each judge
will then be totalled together to give each player a final score.
Players are then ranked from lowest score to highest score; the player
with the lowest score is awarded first place.
In the event of a tie, the individual judging ranks are analysed, and
the tied player with the highest quantity of ranks at the lowest value
shall be awarded the higher finish. Players with identical rank scores
shall mutually be awarded the position.
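For concreteness, the ranking and tie-break procedure in 505.02.01.04 can be sketched in Python. The judge scores and player names below are invented, and I've read the tie-break wording as "compare sorted rank lists, best ranks first":

```python
def rank_players(per_judge_scores):
    """per_judge_scores: one dict per judge mapping player -> score on
    a single criterion. Returns players ordered best-first under the
    rank-sum rule (ties within one judge's scores are not handled)."""
    rank_sums, rank_lists = {}, {}
    for scores in per_judge_scores:
        # A higher score means a better (lower) rank from this judge.
        ordered = sorted(scores, key=scores.get, reverse=True)
        for pos, player in enumerate(ordered, start=1):
            rank_sums[player] = rank_sums.get(player, 0) + pos
            rank_lists.setdefault(player, []).append(pos)

    def key(player):
        # Lowest rank sum first; on a tie, comparing the sorted rank
        # lists lexicographically rewards more ranks at lower values.
        return (rank_sums[player], sorted(rank_lists[player]))

    return sorted(rank_sums, key=key)

# A and B tie on rank sum (5 each); A's ranks [1, 1, 3] beat
# B's [1, 2, 2] under the tie-break, so A finishes first.
print(rank_players([
    {"A": 6.0, "B": 5.0, "C": 4.0},
    {"A": 6.0, "B": 5.0, "C": 4.0},
    {"A": 4.0, "B": 6.0, "C": 5.0},
]))  # -> ['A', 'B', 'C']
```

Even this short sketch exposes gaps in the draft: it says nothing about what happens when one judge gives two players identical scores, which is exactly the kind of ambiguity the rules should close.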
As you can see, there is no mention of drops, and only some very broad definitions.
My vision when I took on the task of updating the rules was to begin by simply making the written rules reflect the actual rules used in competitions, without putting in any of my personal opinions (which I thought would be an easy task, but it took 2 years before I gave up), and then to make them far more specific so that no interpretation of the rules would be necessary.
Clearly this still needs to happen, and regardless of whether the results at this year's worlds are correct or not, the fact that so many people disagree with the decisions made shows how necessary it is to clarify the rules and make them more specific.
Regardless of how people feel, the opinion that the judges did or did not make a mistake is subjective. It depends completely on unwritten expectations of what a good routine is and of what the significance of drops in a routine is. What we need to aim for is a set of rules under which you can look at these routines, have a clear understanding of why each person finished in the position they did, and make an objective decision about whether the judges got it right.
The other issue, which I think comes up every world championships, is transparency in judging. In most subjectively judged sports and competitions, the judges put out short reports on each performance explaining their scores. I know judging is hard work with little reward, but the people judging need to be people who want to see the best results possible, and I don't think writing up 50 to 100 words on each performance is asking a lot. If you think it is, just be glad you don't judge school eisteddfods, where you not only have to sit through 4 to 10 hours of kids trying to play music every day for 2 weeks, but also have to write up justifications for every score you give, and you do it all as a volunteer.
Sorry to sound a little corporate, but instead of saying "this is too hard," we should be asking "what would a perfect system be, and what can we do to move towards it?" Of course we can't achieve perfection, but we can certainly move closer and closer towards it - but only if we have open discussions and always aim to improve.