Just a thought, but many of the most interesting math problems require the ability to write out proofs. This is something I loved about Abstract Algebra: arriving at the correct answer matters, but the process of deriving it is just as important. Brilliant could accommodate this via peer review of submitted proofs. Reviewers could evaluate each proof against simple criteria, each scored as a binary yes/no:
- Conclusion is correct: yes/no
- Mistakes in any step: yes/no
- Logical flow is sound: yes/no
- Other or custom criteria: yes/no
To build this, we'd need a way to write and store the proofs, but that architecture already exists.
The bigger change would be to user profiles. Perhaps if you are Level 2 in Induction, you are able to peer-review a Level 1 or Level 2 question in that topic. After enough peer reviews, an average score is computed for the person who submitted the proof. Reviewers are rewarded with points or badges, much as on Stack Overflow.
I think it can be done.