Apple joins AI Safety Institute Consortium (AISIC) at request of White House

Apple is one of more than 200 companies and other organisations to join the US AI Safety Institute Consortium (AISIC), at the request of the White House.

Amazon, Google, Meta, Microsoft, and Nvidia are among the other companies to join the consortium in response to an executive order by President Biden to ensure that artificial intelligence is “safe, secure, and trustworthy” …

Executive order on AI safety

AISIC was founded in response to a White House executive order issued in October of last year. It set out a whole raft of demands intended to ensure that US citizens were protected from the potential risks of AI systems. These include:

  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. 
  • Protect against the risks of using AI to engineer dangerous biological materials.
  • Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
  • Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.

AI Safety Institute Consortium (AISIC)

The consortium was created to help businesses, academics, and government agencies work together to achieve these goals, reports Reuters.

The Biden administration on Thursday said leading artificial intelligence companies are among more than 200 entities joining a new U.S. consortium to support the safe development and deployment of generative AI. Commerce Secretary Gina Raimondo announced the U.S. AI Safety Institute Consortium (AISIC) […]

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” Raimondo said in a statement […]

The consortium represents the largest collection of test and evaluation teams and will focus on creating foundations for a “new measurement science in AI safety,” Commerce said.

One agreement has already been reached in an effort to stem the spread of fake AI-generated images: major companies behind generative image apps agreed to embed digital watermarks in their output so that it can be easily identified as AI-generated.

Apple’s AI work on Siri has proceeded at a rather leisurely pace, likely in large part due to the company’s concerns about generative AI’s tendency to ‘hallucinate’ – that is, to make completely false statements.

