from Machine Learning
ICML 2026 - Heavy score variance among various batches? [D]
I've seen some people say that very few papers in their batch scored above 3.5, while other reviewers report that most papers in their batch average around 3.75.
Why is there so much variance? Is it due to differences in domain? Did one batch of papers simply get harsher reviewers than another? Does ICML account for this?
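One way a review process *could* correct for harsh vs. lenient reviewers is per-reviewer score normalization (z-scoring). To be clear, this is a hypothetical illustration of the general idea, not a claim about ICML's actual procedure; the reviewer names and scores below are made up.

```python
# Hypothetical sketch: per-reviewer z-score normalization, one common way
# to make scores from harsh and lenient reviewers comparable.
# NOT a description of ICML's actual calibration process.

from statistics import mean, pstdev

# Toy data (invented): reviewer -> raw scores they assigned.
raw_scores = {
    "rev_a": [2.0, 2.5, 3.0],   # harsh reviewer, low raw average
    "rev_b": [3.5, 4.0, 4.5],   # lenient reviewer, high raw average
}

def normalize(scores):
    """Z-score a single reviewer's scores against their own mean/std."""
    mu, sigma = mean(scores), pstdev(scores)
    return [(s - mu) / sigma if sigma else 0.0 for s in scores]

calibrated = {r: normalize(s) for r, s in raw_scores.items()}

# After normalization, both reviewers produce the same score pattern,
# even though their raw averages differ by 1.5 points.
print(calibrated)
```

With this kind of calibration, a 3.0 from a harsh reviewer and a 4.5 from a lenient one both map to the top of their respective batches, which is exactly the batch-to-batch variance the question is about.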
Tagged with
#ICML
#score variance
#reviewers
#papers
#average score
#batch
#domain difference
#harsher reviewers
#submitted papers
#review process
#submission
#peer review
#score distribution
#feedback
#evaluation criteria