Anthropic, an A.I. Company, Urges Job Applicants Not to Use A.I.

"We want to be able to assess people’s genuine interest and motivations for working at Anthropic," a company spokesperson told Observer.

Co-founder and CEO of Anthropic, Dario Amodei. Chesnot/Getty Images

Anthropic, a rapidly rising OpenAI rival, is the company behind Claude, an A.I. assistant that cuts through grunt work, brainstorms ideas, analyzes images and produces text. Just don’t use it to apply for a job at Anthropic. In an ironic twist, the company is urging candidates to refrain from using A.I. when applying for positions there. The stipulation, which Anthropic refers to as its “A.I. Policy,” appears in seemingly all of its roughly 150 open roles.

“While we encourage people to use A.I. systems during their role to help them work faster and more effectively, please do not use A.I. assistants during the application process,” reads the company’s policy, which was first noted by open-source developer Simon Willison. “We want to understand your personal interest in Anthropic without mediation through an A.I. system, and we also want to evaluate your non-A.I. assisted communication skills. Please indicate ‘Yes’ if you have read and agree.”

The policy has appeared in Anthropic’s job postings since at least May 2024, according to archived copies of the company’s earlier listings. It shows up in job descriptions across research, communications, finance and even security, and for roles based in cities including San Francisco, New York City, Seattle, London, Dublin and Zurich.

“We want to be able to assess people’s genuine interest and motivations for working at Anthropic,” the company said in a statement to Observer. “By asking candidates not to use A.I. to answer key questions, we’re looking for signals on what candidates value and their unique answers to why they want to work here.”

The policy specifically applies to an application question that asks candidates: “Why do you want to work at Anthropic?” The company notes that responses to this question typically run 200 to 400 words and are valued “highly.”

Anthropic’s Claude and other A.I. tools like OpenAI’s ChatGPT are widely used in job applications. In a recent Capterra survey of more than 3,000 job seekers, more than half said they used A.I. tools to search for open positions, polish resumes and even write cover letters. Of those using A.I. in their job search, 83 percent said they used it to exaggerate or lie about their skills during the application process.

Anthropic isn’t the only company attempting to crack down on the use of A.I. in job hunting. Around 53 percent of hiring managers said receiving A.I.-generated content would give them reservations about an applicant, according to a survey from Resume Genius, while 20 percent said it could prevent them from hiring a candidate.

Anthropic, co-founded by former OpenAI executive Dario Amodei, is a rising star in Silicon Valley, having raised more than $10 billion in funding from tech giants like Amazon (AMZN) and Google (GOOGL). It is reportedly in talks to raise $2 billion in a new funding round that would value the four-year-old startup at $60 billion.