Thu. Jul 29th, 2021
    Ron Millow, Free TON jury member

    In the Review of the Jurors Efficiency Analysis Contest, we have already examined the work of the winners and the contestants’ main proposals for improving judging. To find out the opinion of the judges themselves, we asked Ron Millow (Chief Business Development Officer at TON Labs) to share his point of view on the problems and prospects of the Free TON judging system.

    • In your opinion, how serious is the problem of jury effectiveness in Free TON?

    First of all, we need to define which jury. I can only speak for the global governance jury; each sub-governance has its own tale. For instance, there was recently an incident uncovered by the Analytics & Support sub-governance regarding possible trickery in voting within the Web & Design sub-governance. Things like that are awful if found to be true; however, so far it seems that it was only conjecture based on soft facts and analytics that were strictly black-and-white, totally missing the grey areas that, in essence, partially debunked this analysis. Nevertheless, if and when bad things do happen, they need immediate rectification. It would be very difficult for me to go in-depth through every single sub-governance here and now, so I will only speak for global governance, if that’s alright.

    In my opinion, the blame for the current effectiveness, or lack thereof depending on how one sees it, and from whichever vantage point, is being shifted heavily toward the jury without considering that people will always complain. There is no avoiding this.

    It’s not so much that there are significant problems with voting, although there certainly are several, no doubt about that. I simply question the extent and acuteness of those problems. I just feel that maybe their significance is overblown due to the fact that money is involved.

    I would challenge those who see issues with voting to look at it from the perspective of an honor-based system: for just one moment, let them project themselves into a pretend world where those same contests were being judged on merit only, for little more than honorary rewards rather than monetary ones. If it is psychologically possible to imagine such a scenario, I would challenge those with the loudest complaints to ask themselves whether they would still feel the same way. This is the true litmus test of effectiveness.

    That being said, obvious mistakes were and continue to be made and they need to be made public, but only in the interest of further perfecting the mechanics, not “fixing” things in hindsight. That is ineffective. In a phrase, I don’t feel it is serious at all. That would be like saying that any system based on trial and error is poor. If that were the case, we wouldn’t have the automobile, the airplane, the computer, the internet, the blockchain, or anything of innovative value for that matter.

    It’s a learning process, and pitfalls are a part of it. There is no problem with the process itself. The only challenge is upgrading it efficiently and quickly enough when problems arise, by confronting and solving them, i.e., moving forward not backward, which will lead to a better system.

    Ultimately it is all about best practices in an environment that has never been tried before on this scale. That’s actually the story of Free TON. If anyone wants to argue that the innovations I mentioned above, including the innovation that is Free TON, are somehow problems, then I would challenge any such person to a debate. Many folks do not understand how breakthroughs come about. Everything on this scale will always be the result of one or another process, or set of processes, that make up an ever-evolving system. Each requires an enormous amount of work, complete with unavoidable ups and downs.

    We just need more community members to contribute ideas and solutions. People are people. Work is work.

    • What are the weak or controversial aspects of the current judging system?

    The primary weak point is the lack of automation of the voting process on-chain. Poor judgment is a direct result of the network not yet having achieved automated decentralization, i.e., governance 2.0. While we will obviously always want things to work out as close to perfect as possible, and constantly strive to achieve that, until everyone in the world can vote with their tokens these problems will persist. To fix this effectively we need to collect best practices. In other words, these issues are valuable, and moreover necessary. Still, we are collecting these best practices to create the Ferrari in an otherwise horse-and-carriage world.

    As for controversy — controversy is healthy. It sparks debate, followed by attention by a larger audience (squeaky wheel gets the grease so-to-speak), followed by active contribution to extinguish said controversy via solutions, etc. Isn’t that the very essence of this Q&A? Controversy — along with necessity — are the parents of innovation, so I don’t care and won’t dwell on controversy. I embrace it.

    Please also keep in mind that SMV (soft majority voting) is a critical part of governance 2.0, and that is to be finished very soon. It will solve the bulk of these problems once it’s part of a smart contract.
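    Since the interview leans on SMV without defining it, here is a minimal sketch of the soft-majority idea: at full turnout a simple majority suffices, while at low turnout an unopposed share of all possible votes is the floor, and every “no” vote raises the required “yes” share. The `smv_accepted` function and the 0.1/0.8 coefficients below are illustrative assumptions, not the actual Free TON contract parameters.

    ```python
    def smv_accepted(yes: int, no: int, total_voters: int) -> bool:
        """Soft majority voting sketch (illustrative parameters).

        With full turnout a simple majority suffices; with low turnout
        the bar drops to an unopposed 10% of all possible voters, and
        each 'no' vote raises the required 'yes' share along that line.
        """
        if total_voters <= 0:
            raise ValueError("total_voters must be positive")
        y = yes / total_voters   # 'yes' share of all possible voters
        n = no / total_voters    # 'no' share of all possible voters
        # Accept if the 'yes' share clears a line running from 10%
        # (no objections) up to 50% (full turnout): y > 0.1 + 0.8 * n
        return y > 0.1 + 0.8 * n
    ```

    For example, with 100 possible voters, 11 yes / 0 no passes the 10% floor, while an even 50/50 split at full turnout does not.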


    • What should be the optimal judging system for Free TON?

    I don’t know what you mean by optimal. If you mean perfection, then there’s no such thing. Optimal is an ever-growing, continuously challenging pursuit toward “better than before.”

    By the way, not even governance 2.0 can ever be perfect. People aren’t perfect, and so, by extension, neither can any system be. That being said, if by optimal you mean the best it can be right here and now, at this very instant in time, then the answer is that it is as optimal as it can be at this very moment. Tomorrow it will be more optimal, and then the next day, and the next, and so on.

    What is the optimal political system? The answer is — something else. Something better. Always something better. Free TON is something much better, as is its voting system. But tomorrow it will be even better. 

    In the meantime we deal with things on a human level, making the process better and better in response to feedback and the issues you touched on in the first two questions. Like the scientific method, you have to go through the steps from hypothesis to conclusion to new hypothesis. We simply have to take the long road. As the quote goes, “There is no elevator to success. You have to take the stairs.” All world-changing events are predicated on this. Everything else is puppies, kitties, unicorns, and rainbows.

    • Does the current judging system need to be reformed? What do you think should be changed in judging?

    Does it need to be reformed? Yes. It is constantly reforming. But I don’t like the word “reformed” here. As I said earlier, it needs to evolve. Evolution of the voting process is the crux.

    The voting system — and I want to underscore this — is not broken, per se. It is just young. Learn to cry, learn to crawl, learn to walk, learn to run, learn to fly, learn to fly to the moon. I don’t want to bombard you with metaphors, but the fact is that these metaphors apply. If I were to judge (no pun intended) where the voting process is now: we have just learned to walk, and so when the sidewalk is icy and the night is dark, bones can be fractured. This is unavoidable. To get to the moon, more people need to contribute their technological know-how.

    As for what should be changed, I already answered that:

    1. collect and analyze best practices,
    2. propose the best solutions through contests, based on those best practices,
    3. implement the best approaches (also through contests),
    4. draw conclusions,
    5. repeat steps 1 through 4 ad nauseam, ad infinitum.

    That is the goal, and it trumps any other system ever invented.

    • Could you give an example of a contest in which the judging was at the highest level? Why do you think so?

    Wow, that’s a very good question. I cannot speak to the technological contests in detail, but I feel very strongly that it is probably one or several of the more tech-slanted contests that have yielded the best results, particularly those in some of the more tech-focused sub-governance groups. The reason being, technology, while prone to argumentation, still has a very defined set of parameters in most cases, and as such, a solution is a solution. With regard to the more creative types of contests, i.e., humanitarian and/or promotional ones if you will, the quest for “best” is far more subjective. What’s the best meme? What’s the best gif? What’s the best landing page? What’s the best essay?

    These things have a way of asking the juror to inject a piece of him or herself into the judging. So long as the requirements described in the contest are met at their bare-bones minimum, who is to say what’s best? In most cases like these, it is the requirements drafted into the contest that need work, not the judging of them.

    There are cultural, personal, experience-based, as well as biased elements at work in such contests. Trying to somehow regulate those stands in stark contrast to the whole premise of Free TON, which is free speech and the right to have an opinion.

    In a non-tech contest, barring any possible oversights of the requirements, the best example of a successful submission is the one that any individual juror thinks is best based on their own preferences. There is no way to control that, nor should there be! The only solutions are a better and more automated voting system, more jurors, and more contests. This is a very important concept. I like blood sausage. That might make you sick to your stomach. Who’s right? We are both right. On the other hand we both are wrong if each of us insists that he or she is more right than the other. This is where volume of opinion comes into play.

    The right to choose is at the core of this question. Not to be repetitive, but the only way to isolate the winners and thereby determine which contest is best is to reach governance 2.0 where every person on the planet with TONs in their wallet can vote. Only then will the true winners be determined.

    As for my personal opinion on which contest is best, my answer is each and every single one, but particularly those that have been deemed horrific failures. Why? For the very reason I described. Because they teach.

    • Could you give an example of the most controversial contest in terms of judging? Why do you think so?

    Most controversial… hm… That’s a tough one.

    If I had a gun to my head, then I would have to call it a tie between the Virtual Hero and the Holiday Greeting Card contests.

    The greeting card debacle I do not want to get into now, because the details are still being debated, but I will comment on the hero contest.

    Proposal fault:

    Not because anyone necessarily did anything (technically) wrong, but because the requirements, while well defined, were perhaps in need of better emphasis on the gist of what was being asked of the contestant, and probably in a much shorter, more succinct form. Non-technical contests that are designed with a sort of whimsical or fun element built in should be short and sweet, with the one or two primary requirements well thought through and delivered face-first. This is not easy, but when the list of requirements reads like a legal contract with 25 bullet points, people just revert back to being human and cannot take the time to study and analyze all of them down to their subatomic specifics. Asking people to do so is futile. Again, in this case the only remedy is a world full of voters, which we do not have yet, but certainly will soon. I think that contest could have been written in three short paragraphs. That is, of course, just my personal opinion.

    Contestant fault:

    Give or take, the overwhelming bulk of the contestants who made submissions didn’t bother to understand the requirements at all. They took it as a design contest, which it was not. In fact they really didn’t even need to work on the character design all that much. There really was no eye candy requirement. They could have just as well submitted a specs document with all of the character mechanics and the hero’s functionality, as described in the contest, and won based on a good description with all of the requisite nuances for how to bring the hero to life. Very few did that. In fact I believe only a small handful out of 122 entries did that.


    Jury fault:

    Simple. Just like the contestants, the jury should have read the requirements and understood them. Many failed to do that. Instead, most chose to judge it as a design contest, which, again, it was not.

    So, now, who do we string up and sacrifice in the interest of betterment? Where’s the proverbial witch? There isn’t one. 

    There is of course a challenge mechanism in place. Put together the statistical data and propose to have certain jurors who made truly awful decisions removed. Put it to a vote. Let the community decide.

    These are my personal points of view. This is how I see it.


    Having answered all of the questions above, I do want to add 3 comments.

    1. Judging is very hard, and it sometimes takes days. When I say days, I mean full-time-work kind of days. In order to expect full-time-job-level results, the length of the voting period needs to be increased, as does the number of jurors. Thankfully these options exist; they just need to be exercised. As for increasing time, the burden falls not on the jurors but on whoever is drafting the contest proposal. As for the number of jurors, that burden falls on the community to make a proposal for yet another jury selection contest.

    2. People win and people lose. This is freedom at its core. When the unhappy 25% are louder than the 75% who won, we as human beings tend to focus on the screamers. This is natural; this is how we are built. I think we are all victims of our own imperfection in focusing on those who are screaming. It gets our attention. And yes, sometimes with merit. Sometimes!

    Whether there is a reason to scream or not can only be:

    • determined after analysis;
    • remedied afterward through a proposal.

    3. If the analysis comes back justifying all of the yelling, then this is a great find. It should get attention and be dealt with. For this we have the Analytics & Support sub-governance; that is where the facts are gathered, and the results should be presented, objectively, for the community to then determine whether or not all the screaming and yelling had any merit to begin with. On the other hand, if all of the screaming and yelling turns out to be, well, just screaming and yelling because someone lost — and someone always has to lose — then what we are really dealing with is a few people who are really angry and want to blame anyone other than themselves for their errors. In a few rare cases, some contestants, in my opinion, were absolutely convinced of their own self-entitled “genius” and felt that the mean ole jury somehow overlooked their gift to the masses. Again, we are dealing with living, breathing human beings. We are all imperfect.


    Barring any exceptional cases, and I am only providing my subjective opinion, so far I have found that the overwhelming bulk of complaints actually have little or no justification whatsoever beyond opinion. The few cases that egregiously violated the judgment process, well, of course those should be fixed.