Apple Intelligence Adheres to New AI Safeguards

Apple, along with other major tech companies, has agreed to a new voluntary initiative by the Biden administration to implement artificial intelligence (AI) safeguards.

iOS - 26-07-2024 03:59

The guidelines emphasize fairness in AI model development and the monitoring of potential security or privacy issues.

Key Tenets of the Executive Order:

- Developers of powerful AI systems must share their safety test results with the U.S. government.
- Standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy.
- Measures will be taken to prevent the use of AI in engineering dangerous biological materials.
- Standards and best practices will be established to protect against AI-enabled fraud and deception.
- An advanced cybersecurity program will be developed to find and fix vulnerabilities in critical software.
- A National Security Memorandum will direct further actions on AI and security.

Under this executive order, companies are asked to share compliance testing results with one another and with the federal government. The order also includes a voluntary security risk assessment, though there are currently no penalties for non-compliance and no enforcement framework in place. Additionally, AI systems must be tested before becoming eligible for federal purchase.

While the guidelines have been set, how compliance will be verified and enforced remains unclear, particularly under future administrations. Despite bipartisan efforts to regulate AI development, progress has been slow, and further debate is unlikely before the November 2024 elections.

The White House plans to hold a briefing on July 26 to discuss the initiative further.
