Rules

  • Use of Pre-trained Models: Participants may freely use pre-trained weights obtained from open-source datasets or models to initialize their target models. This practice is allowed to facilitate faster development cycles and to encourage innovation in model architecture and algorithm design.
  • Registration Policy: Each advisor is permitted to mentor only one team. Registering fake teams to bypass submission limits is strictly prohibited; any violation will result in immediate disqualification from the competition and forfeiture of prizes.
  • Restrictions on Data Usage: To maintain the integrity of the challenge, participants are explicitly prohibited from using additional private or in-house datasets to pre-train their models. Compliance with this rule ensures that all participants are evaluated on an equal footing.
  • Submission Limitations: Participants are restricted to a maximum of three submission attempts per day for Track 1 and one submission attempt per day for Track 2. Each submission must consist of the model’s predictions on the test partition, along with the corresponding model and checkpoints/weights.
  • Evaluation Metrics: The organizers will evaluate the submitted models using Mean Squared Error (MSE) for Track 1 and Accuracy (Acc) for Track 2. Each metric will be computed by comparing the participants’ predictions against the ground-truth labels; a minimal sketch of both metrics follows this list.
  • Final Ranking: Each team’s results on the final test set will determine the final ranking of the challenge. This ensures that the models are thoroughly tested and their performance is accurately assessed.
  • Reproducibility: All materials, including data, must be accessible to all participating teams so that the submitted models are fully reproducible.
  • Organizer’s Role: The organizers will not actively participate in the challenge but will undertake a re-evaluation of the findings from the best-performing systems in each sub-challenge to validate the results.
  • Eligibility: The challenge is open to any researcher from any organization based anywhere in the world, fostering a global collaborative environment in addressing the complex issue of asynchronous video interviews.
  • Presentation Policy: ACM Multimedia 2026 is an on-site-only event: all papers and contributions must be presented in person on-site; remote presentations will not be hosted or allowed. Papers and contributions not presented on-site will be treated as no-shows and removed from the conference proceedings. More details will be provided for handling unfortunate situations in which no author is able to attend the conference in person.
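
For reference, the two track metrics can be computed as follows. This is a minimal sketch using NumPy; the array names and example values are illustrative, not official data or an official evaluation script.

```python
import numpy as np

def mse(y_pred: np.ndarray, y_true: np.ndarray) -> float:
    """Track 1 metric: Mean Squared Error (lower is better)."""
    return float(np.mean((y_pred - y_true) ** 2))

def accuracy(y_pred: np.ndarray, y_true: np.ndarray) -> float:
    """Track 2 metric: classification Accuracy (higher is better)."""
    return float(np.mean(y_pred == y_true))

# Illustrative usage with made-up predictions and labels:
track1 = mse(np.array([0.2, 0.4]), np.array([0.1, 0.5]))
track2 = accuracy(np.array([1, 0, 1, 1]), np.array([1, 0, 0, 1]))
print(f"MSE={track1:.4f}, Acc={track2:.4f}")  # MSE=0.0100, Acc=0.7500
```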

  • Paper Recommendation Policy

    • Automatic Qualification: The first-place winner of each individual track will automatically receive a recommendation for a Main Track paper submission.
    • Third Recommendation Slot: The third recommendation slot is awarded to the team with the highest Composite Score, which measures the relative performance gap between a team and the winners of both tracks (excluding the track winners themselves); a higher score indicates a smaller combined gap to the two winners.
      Score Calculation Formula:

      $Score_i = 0.5 \times \frac{mse_1}{mse_i - mse_1} + 0.5 \times \frac{acc_1}{acc_1 - acc_i}$

      where $mse_1$ and $acc_1$ are the winning Track 1 and Track 2 results, and $mse_i$ and $acc_i$ are the results of team $i$.

      Example Calculation:
      Assume $mse_1 = 0.1700$ and $acc_1 = 0.9700$.
      For Team M ($mse_M = 0.1900, acc_M = 0.9500$):
      $Score_M = 0.5 \times \frac{0.1700}{0.1900 - 0.1700} + 0.5 \times \frac{0.9700}{0.9700 - 0.9500} = \mathbf{28.5000}$

      *All values for Score, MSE, and Accuracy are rounded to four decimal places.
    • Tie-breaking Rules: In the event of a tie in the composite score, the following criteria will be applied sequentially (a sketch of the full ranking procedure follows this list):
      1. Average Rank: The lower the average rank across both tracks, the higher the standing.
      2. Submission Count: The team with fewer test set submissions will be ranked higher.
      3. Submission Timestamp: The team that achieved their best result earlier (upload time) will be ranked higher.
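
To make the recommendation ranking concrete, here is a minimal Python sketch that computes the composite score and applies the three tie-breakers in order. The `Team` container and its field names are illustrative assumptions, not an official submission schema; the final line reproduces the Team M example above.

```python
from dataclasses import dataclass

@dataclass
class Team:
    # Field names are illustrative, not an official schema.
    name: str
    mse: float               # Track 1 result (lower is better)
    acc: float               # Track 2 result (higher is better)
    avg_rank: float          # average rank across both tracks
    n_submissions: int       # number of test-set submissions
    best_upload_time: float  # Unix timestamp of the best-scoring upload

def composite_score(team: Team, mse_1: float, acc_1: float) -> float:
    """Composite score from the rules: higher means a smaller
    combined gap to the Track 1 and Track 2 winners."""
    return round(
        0.5 * mse_1 / (team.mse - mse_1) + 0.5 * acc_1 / (acc_1 - team.acc),
        4,  # all values are rounded to four decimal places
    )

def rank_for_third_slot(teams: list[Team], mse_1: float, acc_1: float) -> list[Team]:
    """Rank non-winning teams by composite score (descending), breaking
    ties by average rank, then submission count, then upload time."""
    return sorted(
        teams,
        key=lambda t: (
            -composite_score(t, mse_1, acc_1),  # 1) higher score first
            t.avg_rank,                          # 2) lower average rank first
            t.n_submissions,                     # 3) fewer submissions first
            t.best_upload_time,                  # 4) earlier best result first
        ),
    )

# Reproducing the worked example with mse_1 = 0.1700 and acc_1 = 0.9700
# (avg_rank, n_submissions, and best_upload_time are placeholder values):
team_m = Team("M", mse=0.1900, acc=0.9500, avg_rank=2.0,
              n_submissions=10, best_upload_time=0.0)
print(composite_score(team_m, 0.1700, 0.9700))  # 28.5
```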