As far as policy guidelines go, I think they’re quite reasonable. I read the entire policy and the staff comments on the page.
They lay out expectations: humans must write the articles and verify every claim in AI output against its source, whether that output comes from transcribing interviews or summarizing documents. They make clear that not following the policy is a violation, putting the authors or editors on the hook for those failures, likely through Ars or Condé Nast's disciplinary procedures. Those are the reactive controls: accountability for failures, should any occur.
Per the document and the subsequent comments, both editors and reporters have to verify that reporting is accurate. That is a reasonable set of proactive controls. If a pattern of failures emerges, whether in their frequency or in a lack of accountability, it would be fair to call their output AI slop, but I think it's too early to make that claim.
You’re under no obligation to assume they’ll succeed or that they’re sincere, but it’s a clearly written reader-facing policy.