Anthropic's Powerful AI Model Secures Software, Raises Familiar Concerns

Anthropic is pitching Project Glasswing as a cybersecurity breakthrough, but the first battle may be getting everyone to believe in the legend of Claude Mythos.


Anthropic's AI model, Claude Mythos, has been touted as a game-changer in cybersecurity, but its development and deployment raise familiar concerns about the responsible use of powerful AI.

The model 'leaked' a few days ago, but Anthropic has since said it will not be made generally available, citing its exceptional capabilities. Instead, it will serve as the foundation for Project Glasswing, an industry consortium aimed at securing software.

The partners include Apple, Google, Microsoft, Amazon Web Services, Nvidia, and Cisco, which will use the model to hunt for software vulnerabilities, including previously unknown zero-days. Anthropic will share what it learns with the wider industry, and the company is committing up to $100 million worth of usage credits for the Mythos Preview.

The model has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Even so, the partners are not handing their entire cybersecurity apparatus over to the new model; they are treating it as an additional layer of defense.

Anthropic's positioning of Claude Mythos within Project Glasswing is two-pronged: strengthening defensive cybersecurity applications while discovering zero-day vulnerabilities before attackers do. The company's Frontier Red Team has tested the model and found that it significantly outperforms previous models in autonomous exploit development.

While the industry may benefit from the model's capabilities, the narrative surrounding its development and deployment raises familiar concerns about the responsible use of powerful AI. The 'leak' and the subsequent decision to deploy the model selectively through a consortium project an image of responsibility, but more transparency will be needed to show that these models are actually being used for the greater good.