SplxAI Secures Funding to Oversee AI Development: Explore Their Pitch Deck.

Understanding AI Vulnerabilities and the Need for Protecting Systems
AI Adoption in the Corporate World
In today’s fast-paced corporate landscape, companies are eager to harness the potential of artificial intelligence (AI) to boost productivity and enhance profits. However, the increasing reliance on AI systems raises significant concerns about their security. Businesses are rightfully wary of the potential for chatbots and AI systems to malfunction or be manipulated maliciously.
The Threat Landscape for AI
AI systems are exposed to various threats, including:
- Data Poisoning: This occurs when harmful data is introduced into the training sets, leading to flawed AI behavior.
- Adversarial Attacks: These attacks are specifically designed to deceive AI systems into making incorrect decisions.
A survey conducted by the World Economic Forum in 2023 highlighted that over half of surveyed business leaders believed that generative AI would offer cybercriminals an advantage over defenders in the next two years. This sentiment has proven accurate, as evidenced by a recent survey from Accenture, where 80% of bank cybersecurity executives reported that generative AI is enabling hackers to outpace the banks’ defenses.
Innovations in AI Vulnerability Testing
In response to the growing risks associated with AI, Croatian cybersecurity startup SplxAI aims to change the game regarding how companies assess the vulnerabilities in their AI systems. Recently, they secured $7 million in seed funding, enabling them to focus on preemptively identifying and mitigating threats.
One traditional method for testing AI security is known as red-teaming. This process simulates potential attacks on an AI system but can often take several weeks or months. As companies hurry to verify their AI tools, SplxAI’s CEO, Kristian Kamber, emphasized the need for a quicker response, leading to their platform’s innovative offerings.
Customized Security Assessments
Before using SplxAI’s services, clients fill out a questionnaire designed to understand their specific risks. The questions can include:
- "Are there particular queries your chatbot should avoid responding to?"
- "Which components of the system prompt are confidential and should be protected?"
For instance, a chatbot targeting Generation Z highlights the need for adaptable language, even including swearing to connect with younger users.
Conducting Vulnerability Tests
Once SplxAI has a clear understanding of a client’s needs, it carries out an array of tests. The platform can execute over 2,000 attacks and 17 scans in less than an hour, investigating for issues like:
- Prompt Injection Attacks: Where malicious prompts are introduced to test the system’s responses for harmful content.
- Checks for Bias or Misuse: Testing if the AI is inadvertently biased or could be used for harmful purposes.
Kamber noted that these comprehensive assessments have unveiled various biases and vulnerabilities, highlighting significant risks in widely-used technology.
Real-Life Impacts of Testing
SplxAI has made noteworthy discoveries:
- A workplace productivity tool was found to allow data leaks among colleagues.
- Chatbots in pharmacies provided incorrect medical instructions, creating dangerous scenarios for patients.
- Gender bias was identified in a career advice chatbot that directed young women toward traditionally female roles, such as secretaries, while encouraging young men toward leadership roles.
After completing the tests, SplxAI produces a detailed report outlining vulnerabilities and proposed remedies. However, they also modify system prompts to reinforce security—what Kamber refers to as “hardening.” This proactive measure has become a significant aspect of their business, as mere testing without actionable solutions would limit customer interest.
Addressing Regional Sensitivities
SplxAI collaborated with an Arabic chatbot popular in the Middle East to secure it against discussions about sensitive topics, such as criticisms of local leaders. By strengthening the system’s prompt, users are unable to ask suggestive or inappropriate questions.
The Growing Need for Robust AI Security
Many companies are now focused on securing several AI agents or applications simultaneously to automate intricate tasks. Recognizing the urgency of addressing these vulnerabilities, SplxAI launched "Agentic Radar," an open-source tool dedicated to mapping out vulnerabilities across operations involving multiple AI agents.
Kamber expressed surprise at the quick acknowledgment of AI threats in the industry, noting that awareness has dramatically increased within a short period. Businesses that were previously unaware of the need for security measures are now actively seeking solutions and support to protect their AI systems.
By shifting the focus from reactive to proactive security measures, companies can better prepare their AI systems against an ever-evolving landscape of cyber threats.