OpenAI Launches IDV Screening for Organizations

OpenAI’s New Verification Process for Organizations
OpenAI, a leader in artificial intelligence technology, recently announced that approximately 800 million users, nearly 10% of the global population, are using its generative AI systems, a figure CEO Sam Altman shared at TED 2025. In light of this expansive reach, OpenAI is introducing a verification process designed to ensure that organizations use its platforms securely.
What is the Verified Organization Process?
The newly launched Verified Organization process gives organizations access to more advanced models and features on the OpenAI platform. The step is meant to balance responsible use of artificial intelligence tools against keeping them available to a wide audience.
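For developers, the gate surfaces at the API level: an unverified organization calling a verification-gated model receives a permission error. The minimal sketch below uses OpenAI's official Python SDK to detect that case; the model name is only a placeholder, and the exact error behavior is determined by the API.

```python
from openai import OpenAI, PermissionDeniedError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

try:
    # "o3" is used here only as a placeholder for a model that is
    # gated behind organization verification.
    response = client.chat.completions.create(
        model="o3",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)
except PermissionDeniedError as err:
    # An unverified organization calling a gated model gets a 403;
    # the exact error message is set by the API.
    print(f"Access denied, verification may be required: {err}")
```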
Requirements for Verification
To be verified, organizations must submit a valid government-issued ID from a country that is supported by OpenAI’s API. It’s important to note:
- Unique ID Submission: Each ID can only verify one organization within a 90-day period.
- Eligibility Limits: Not every organization will qualify for verification.
This move is particularly significant following OpenAI’s decision to block access from countries such as China and North Korea, over concerns that its technology could be misused for harmful activities, including surveillance and influence campaigns.
Reason Behind the Verification
According to OpenAI, a small minority of developers have misused its APIs in violation of the company’s usage policies. By adding a verification step, OpenAI aims to reduce the likelihood of dangerous applications of its technology. The company states on its website, “We’re adding the verification process to mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community.”
Ongoing Security Measures
OpenAI has previously collaborated with the identity verification company Persona to strengthen its security protocols. The partnership began when OpenAI reached 100 million users worldwide. Persona’s identity verification tools allow OpenAI to screen new users against more than 100 international sanctions and warning lists spanning over 220 countries and territories.
How it Works
- Screening Process: OpenAI uses Persona to verify the identities of new sign-ups, checking them against the sanctions and warning lists described above.
- Escalation: When an automated check is inconclusive, the system can prompt for additional verification or route the case for manual review, as outlined in the sketch after this list.
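As a rough mental model of that two-step flow, here is a minimal, self-contained Python sketch of screening a sign-up against watchlists and escalating inconclusive cases. Every name, list entry, and decision rule in it is hypothetical and does not reflect OpenAI's or Persona's actual implementation.

```python
# Illustrative two-step screening flow; all data below is invented.
WATCHLISTS = {
    "sanctions": {"blocked entity ltd"},  # hard denials
    "warnings": {"flagged org inc"},      # escalate to manual review
}

def screen_signup(name: str) -> str:
    """Return 'approve', 'review', or 'deny' for a new sign-up."""
    normalized = name.strip().lower()
    if normalized in WATCHLISTS["sanctions"]:
        return "deny"      # direct match on a sanctions list
    if normalized in WATCHLISTS["warnings"]:
        return "review"    # inconclusive: route to a human reviewer
    return "approve"       # automated check passed

print(screen_signup("Acme AI Labs"))     # approve
print(screen_signup("Flagged Org Inc"))  # review
```

In a production system the exact-match lookup would be replaced by fuzzy name matching and richer applicant data, but the approve/review/deny structure mirrors the automated-screening-plus-manual-review process the article describes.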
Jake Brill, head of the integrity product team at OpenAI, explained in a blog post that these verification measures help maintain the integrity of the platform while expanding its availability to compliant, trustworthy organizations.
Implications for Users and Developers
The implementation of this verification system may have several implications for users and developers:
- Increased Trust: Organizations that complete the verification can assure stakeholders of their legitimacy and commitment to ethical AI use.
- Access to Advanced Features: Verified organizations will gain access to cutting-edge tools that can help improve their applications and services.
- Clearer Responsibilities: Organizations must comply with usage policies or risk losing access to the platform.
OpenAI’s verification process reflects a growing commitment to responsible AI development and use. By establishing a framework that emphasizes oversight and accountability, OpenAI aims to cultivate a safer environment for both developers and end-users while continuing to innovate in the AI space.
As the landscape of artificial intelligence evolves, such measures are essential to balance innovation with the need for security and ethical use, ensuring that technology serves humanity in positive ways.