All data is now being processed on time.
The incident has been resolved with a patch. The underlying issue was an unbounded query that kept fetching data, causing our containers to be killed for running out of memory.
The issue has been resolved, and the LLM-as-a-Judge Evaluators are now running again as expected.