I tested Manus against DeepSeek using 7 Gemini prompts — here’s the outcome.

Comparing AI Chatbots: Manus vs. DeepSeek
Introduction to Manus and DeepSeek
DeepSeek made a significant impact on the AI chatbot landscape after its launch, quickly becoming a competitive alternative to established models. Its open-source models have since been adopted by various platforms, including Perplexity. Recently, another contender named Manus has emerged from a Chinese startup called Butterfly Effect, based in Wuhan. Since its launch just a week ago, Manus has garnered considerable interest, though only a tiny fraction of its nearly 2 million waitlisted users have had the chance to try it.
To better understand how Manus stacks up against DeepSeek, I had Gemini generate seven prompts to put both chatbots through their paces. Here’s a breakdown of their performance across various tasks.
1. Complex Reasoning
Prompt
"Imagine a world where gravity suddenly reverses for one hour each day. Describe the societal and technological adaptations necessary. Then, write a short story (around 300 words) about a character experiencing this for the first time, focusing on their reactions."
Performance
Manus produced a narrative that felt somewhat textbook-like and lengthy, despite having strong emotional descriptions. In contrast, DeepSeek offered a more concise and imaginative tale, allowing readers to better visualize the changing societal dynamics during gravity shifts.
Winner: DeepSeek for its immersive storytelling.
2. Coding
Prompt
"Write a Python function that takes a list of strings and returns a dictionary of their unique lengths, with explanations and performance optimizations."
Performance
Manus provided more detailed explanations, making its answer accessible for beginners, but it repeated itself in places, which hurt readability. DeepSeek explained its code just as clearly while going deeper into performance optimizations, delivering a more concise and informative answer.
Winner: DeepSeek for clearer and more efficient explanations.
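The prompt leaves the exact output shape open to interpretation, and neither chatbot's actual code is reproduced here. As a minimal sketch of one plausible reading (mapping each unique length to the strings of that length, in a single O(n) pass), the answer might look like this:

```python
from collections import defaultdict

def group_by_length(strings):
    """Group strings by their length.

    Returns a dict mapping each unique length to the list of strings
    of that length. Runs in O(n): one pass over the input, with a
    constant-time dict lookup per string.
    """
    groups = defaultdict(list)
    for s in strings:
        groups[len(s)].append(s)
    return dict(groups)
```

For example, `group_by_length(["hi", "to", "sun", "moon"])` returns `{2: ["hi", "to"], 3: ["sun"], 4: ["moon"]}`. Using `defaultdict` avoids a per-key membership check, which is the kind of small optimization the prompt invites both chatbots to discuss.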
3. Data Analysis
Prompt
"Given a dataset of daily temperatures in Celsius, calculate the mean, median, and standard deviation. Interpret the data."
Performance
Manus enumerated additional metrics like temperature range and number of days exceeding the mean, offering deeper insights into trends. DeepSeek provided clear calculations but didn’t segment the data meaningfully for analysis.
Winner: Manus for its thorough trend analysis and additional insights.
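The core calculations the prompt asks for are standard-library territory in Python. A minimal sketch (again an illustration, not either chatbot's actual output) could be:

```python
import statistics

def summarize_temperatures(temps):
    """Return mean, median, and sample standard deviation
    for a list of daily temperatures in Celsius."""
    return {
        "mean": statistics.mean(temps),
        "median": statistics.median(temps),
        "std_dev": statistics.stdev(temps),  # sample (n-1) standard deviation
    }
```

For `[18.0, 21.0, 19.5, 22.0, 20.5]` this yields a mean of 20.2 °C and a median of 20.5 °C. The extra metrics Manus volunteered, such as the temperature range (`max(temps) - min(temps)`) and the count of days above the mean, are one-liners on top of this.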
4. Multilingual Translation
Prompt
"Translate ‘Don’t count your chickens before they hatch’ into Spanish, French, and Japanese. Explain cultural nuances."
Performance
Manus excelled with a detailed discussion on the Spanish translation’s cultural contexts related to sports and business. DeepSeek’s response was solid but less elaborate.
Winner: Manus for its cultural depth and relevance.
5. Open-ended Discussion
Prompt
"Discuss the ethical implications of AI in creative arts."
Performance
Manus provided an in-depth analysis covering economic shifts and roles of artists, while DeepSeek’s response was somewhat surface-level in comparison.
Winner: Manus for a more comprehensive discussion.
6. Real-time Information Retrieval
Prompt
"Summarize recent scientific discoveries about exoplanet atmospheres from the last 30 days."
Performance
Manus delivered an up-to-date, detailed summary with specifics. DeepSeek acknowledged its knowledge cutoff of October 2023, which effectively disqualified it from this prompt.
Winner: Manus for providing current information.
7. Scenario-based Problem Solving
Prompt
"Devise a strategic plan for an independent bookstore facing competition from an online retailer."
Performance
Manus outlined a well-structured plan with phased implementation steps, emphasizing cross-functional strategies. DeepSeek offered creative ideas but omitted a detailed budget.
Winner: Manus for its integrated and long-term approach.
Overall Performance Summary
After thorough testing across various prompts, it’s clear that Manus generally outperforms DeepSeek. While DeepSeek excelled in creativity and short-term solutions, Manus demonstrated robust, well-organized, and practical responses that consistently addressed the complexity of the tasks. Its ability to synthesize information into structured solutions gives it a notable edge in this comparative analysis.