OpenAI Takes Its First Step into Cybersecurity Investment

The Rising Threat of Generative AI in Cybercrime

Understanding Generative AI’s Capabilities

Generative AI has introduced a new set of tools that can be exploited by cybercriminals. Today, it’s possible to carry out actions such as deepfaking executives or creating fake documents with astonishing accuracy. These advancements pose serious risks for individuals and businesses alike.

OpenAI’s Investment in Cybersecurity

OpenAI, a leader in the generative AI sector, recognizes these threats. Recently, the company made a significant investment in a cybersecurity startup, Adaptive Security, which focuses on defending against AI-driven attacks. The $43 million funding, co-led by OpenAI’s startup fund and venture capital firm Andreessen Horowitz, marks OpenAI’s inaugural investment in a cybersecurity firm.

Adaptive Security: Training Against AI Threats

Based in New York, Adaptive Security utilizes AI-generated simulations to educate employees on potential threats. These simulations can mimic various forms of communication, including phone calls, texts, and emails. For example, an employee might receive a call that seems to be from their Chief Technology Officer asking for sensitive information, but it’s actually a realistic imitation crafted by Adaptive Security.

Key Features of Adaptive Security’s Platform

  • Phone Spoofing: Employees receive phone calls that sound legitimate but are actually generated by the system.
  • Text and Email Simulations: The platform also includes fake texts and emails to train users in identifying red flags.
  • Vulnerability Assessment: It assesses which areas of a company are most exposed to these types of threats and trains staff to recognize risks.
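To make the training idea concrete, here is a minimal illustrative sketch of what a simulated phishing email with deliberately embedded red flags might look like. This is purely hypothetical training material, not Adaptive Security's actual product or API; all function and field names are invented for illustration.

```python
# Hypothetical sketch of a training-simulation email generator.
# The template, field names, and red-flag list are illustrative only.

RED_FLAGS = [
    "urgency",            # e.g. "within the hour", "immediately"
    "sensitive request",  # credentials, wire transfers, gift cards
    "sender mismatch",    # display name doesn't match the real domain
]

def build_simulated_phish(target_name: str, spoofed_role: str) -> dict:
    """Assemble a training email that deliberately embeds common red flags."""
    return {
        "from_display": f"{spoofed_role} (via external relay)",
        "subject": "Urgent: credentials needed within the hour",
        "body": (
            f"Hi {target_name}, this is your {spoofed_role}. "
            "I'm locked out before a board call -- please reply with "
            "your VPN password immediately."
        ),
        "embedded_red_flags": RED_FLAGS,
    }

email = build_simulated_phish("Jordan", "Chief Technology Officer")
print(email["subject"])
```

In a real platform, emails like this would be sent to staff and their responses tracked; here the point is simply that each message is seeded with known red flags (urgency, a request for secrets, a suspicious sender) so trainees can learn to spot them.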

The focus on "social engineering" tactics is crucial, as these attacks rely on a human response, such as clicking a harmful link or handing over credentials. Though such tactics may seem basic, they can lead to enormous financial losses: the company behind the game Axie Infinity lost over $600 million to an attack traced to a deceptive job offer.

The Growing Threat Landscape

According to Brian Long, co-founder and CEO of Adaptive Security, AI tools have made it remarkably easier for attackers to conduct social engineering hacks. Since its launch in 2023, Adaptive Security has garnered over 100 clients, and positive feedback from those clients helped attract OpenAI's investment.

Brian Long’s Background

Brian Long has a history in entrepreneurship, having previously founded two successful companies:

  • TapCommerce: A mobile advertising startup sold to Twitter in 2014 for over $100 million.
  • Attentive: An ad-tech firm valued at over $10 billion in 2021.

Long plans to use the funding for hiring engineers to advance Adaptive Security’s product, emphasizing the need for robust defenses against increasingly sophisticated cyber threats.

Other Players in the Cybersecurity Space

Adaptive Security is not alone in focusing on the challenges presented by AI in cybersecurity. Other startups are also emerging to address these concerns:

  • Cyberhaven: Recently raised $100 million to help protect sensitive information from being misused in AI tools.
  • Snyk: Observed a rise in demand driven by vulnerabilities in AI-generated code.
  • GetReal: A startup dedicated to detecting deepfakes, which recently received $17.5 million in funding.

Advice for Employees

As the threat of AI-assisted attacks grows, Brian Long offers a simple piece of advice for employees concerned about voice cloning and other AI-based threats: "Delete your voicemail." Taking proactive steps can help mitigate risks associated with advanced cyber threats.
