The International Joint Conference on Neural Networks (IJCNN) 2025 faced an unprecedented challenge: handling 5,526 paper submissions, a 100% increase from the previous edition, with 12,407 authors from 68 countries. This massive scale required a meticulously organized review process involving 7,877 active reviewers and 426 area chairs, who collectively produced 18,996 reviews. The conference management system CMT was chosen for its scalability and security, enabling a hierarchical structure in which three technical program chairs coordinated area chairs, each overseeing about 15-20 papers. With an average of 3.44 reviews per paper, the process aimed to ensure fairness despite the sheer volume, highlighting the growing demands placed on academic conferences in the AI era.
To manage reviewer assignment and mitigate misconduct risks, IJCNN 2025 implemented a multi-criteria approach. Reviewers were sourced from past conferences, principal authors of submissions, and volunteers, with 7,736 reviewers completing at least one review. The assignment process weighted three factors: reviewer bids (20%), Toronto Paper Matching System scores (30%), and subject area relevance (50%). This strategy helped address issues like the 'lone wolf' effect, where author-reviewers might manipulate scores, by assigning at least one author-reviewer per paper. The hierarchical organization allowed timely completion, with only 2.2% of papers receiving fewer than three reviews, showcasing efficient logistics in a decentralized framework.
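The three-factor weighting described above can be sketched as a simple affinity score. The weights (20/30/50) come from the article; the function name and the assumption that all three signals are normalized to [0, 1] are illustrative, not from the paper.

```python
# Sketch of the weighted reviewer-assignment score: bids 20%,
# Toronto Paper Matching System (TPMS) 30%, subject-area relevance 50%.
# Names and the [0, 1] normalization are assumptions for illustration.

def assignment_score(bid: float, tpms: float, subject_overlap: float) -> float:
    """Combine three normalized [0, 1] signals into one affinity score."""
    return 0.20 * bid + 0.30 * tpms + 0.50 * subject_overlap

# Example: a reviewer who bid enthusiastically (0.9), has a moderate
# TPMS match (0.6), and strong subject-area overlap (0.8).
score = assignment_score(0.9, 0.6, 0.8)
print(round(score, 2))  # 0.76
```

Because subject-area relevance carries half the weight, a reviewer with strong topical overlap can outrank one who merely bid on the paper, which matches the article's emphasis on expertise over preference.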
The scoring system for papers was detailed and structured, using quantized metrics to evaluate quality. Reviewers assessed criteria such as relevance, technical quality, novelty, and presentation, each rated on a scale from -1.0 to 1.0, alongside reviewer confidence and an overall recommendation on a 7-point Likert scale. Scores were aggregated using weighted means based on reviewer confidence, then normalized to a 0-100 range. A harmonic mean combined reviewer and meta-reviewer scores into a final score index, with the top papers selected to meet a target acceptance rate of about 40%, resulting in 2,152 accepted papers out of 5,526 submissions (38.94%).
An experimental calibration procedure was introduced to remove reviewer-specific biases, a common issue in subjective evaluations. This approach, inspired by techniques used at NeurIPS conferences, modeled each score as a sum of paper quality, reviewer bias, and subjective error, using Gaussian distributions to estimate the parameters. By dequantizing scores and applying Bayesian techniques, the process adjusted for miscalibrations, such as varying interpretations of rating scales. The calibrated scores helped resolve gray-area decisions, particularly for papers with 'Weak Accept' ratings, ensuring a more objective ranking alongside traditional metrics.
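The additive model described above (observed score = paper quality + reviewer bias + noise) can be illustrated with a toy estimator. This alternating-averages sketch is not the paper's Bayesian procedure, only a minimal demonstration of how a systematic per-reviewer offset can be recovered; note the model is identifiable only up to a constant shift.

```python
import numpy as np

# Toy estimate for the additive model s_ij = quality_i + bias_j + noise.
# Alternately re-estimate paper qualities and reviewer biases by averaging.
def calibrate(scores, n_iters=50):
    """scores: dict mapping (paper, reviewer) -> observed score."""
    papers = sorted({p for p, _ in scores})
    reviewers = sorted({r for _, r in scores})
    quality = {p: 0.0 for p in papers}
    bias = {r: 0.0 for r in reviewers}
    for _ in range(n_iters):
        # Re-estimate each paper's quality with current biases removed...
        for p in papers:
            vals = [s - bias[r] for (pp, r), s in scores.items() if pp == p]
            quality[p] = float(np.mean(vals))
        # ...then each reviewer's bias against those qualities.
        for r in reviewers:
            vals = [s - quality[p] for (p, rr), s in scores.items() if rr == r]
            bias[r] = float(np.mean(vals))
    return quality, bias

# Example: reviewer "r2" scores every paper exactly 1.0 higher than "r1".
obs = {("p1", "r1"): 6.0, ("p1", "r2"): 7.0,
       ("p2", "r1"): 4.0, ("p2", "r2"): 5.0}
quality, bias = calibrate(obs)
# bias["r2"] - bias["r1"] recovers the 1.0 offset between the reviewers
```

On this toy data the estimator attributes the constant 1.0 gap to reviewer bias rather than paper quality, which is exactly the kind of miscalibration the conference procedure aimed to correct at scale.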
Despite these innovations, the review process faced limitations, including the computational burden of calibrating over 43.5 million parameters and the subjective nature of peer review. The conference organizers developed Python scripts to handle data exported from CMT, making the tools available online for future use. As AI conferences grow, challenges such as detecting AI-generated text or inconsistent reviews remain, pointing to potential improvements with LLMs or cosine similarity metrics. This experience underscores the evolving complexity of academic peer review in scaling neural network research.
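As one hypothetical direction, the cosine-similarity idea mentioned above could flag near-duplicate review texts. The bag-of-words representation here is an assumption for illustration; a real tool would more likely use embeddings from a language model.

```python
import math
from collections import Counter

# Hypothetical sketch: comparing review texts with cosine similarity over
# simple bag-of-words vectors to flag suspiciously similar reviews.
def cosine_similarity(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

r1 = "the paper is well written but lacks novelty"
r2 = "the paper is well written but lacks novelty"
r3 = "strong experiments and a clear novel contribution"
print(cosine_similarity(r1, r2))  # identical reviews score 1.0
```

Pairs of reviews scoring near 1.0 could be routed to an area chair for inspection, while unrelated reviews score near 0.0.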
Reference: Scarpiniti, M., Comminiello, D. (2026). The IJCNN 2025 Review Process. arXiv:2603.19244v1 [cs.DL].
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.