I Tested Manus Against ChatGPT with 5 Prompts — Here Are the Results

The Rise of Manus: A New AI Agent in the Market

Since its introduction just last week, Manus, developed by the Wuhan-based startup Butterfly Effect, has made a significant impact online. More than 2 million people have joined the waitlist, and the AI community is buzzing with excitement. Manus has been called the world’s first general AI agent, a claim that sets it apart from typical AI chatbots such as ChatGPT.

What Makes Manus Different?

Multiple AI Models

Unlike traditional AI systems that rely on a single large language model, Manus uses several, including Anthropic’s Claude 3.5 Sonnet and variants of Alibaba’s open-source Qwen. This multi-model approach lets Manus handle a wider range of tasks than simple conversation.
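The article doesn’t describe how Manus actually dispatches work across its models, but the multi-model idea can be sketched as a simple task router. The model names and routing rules below are purely illustrative assumptions, not Manus’s real implementation:

```python
# Hypothetical sketch of a multi-model router: each task type is mapped
# to the model assumed to suit it best. These names and rules are
# illustrative only, not Manus's actual architecture.
ROUTES = {
    "conversation": "claude-3.5-sonnet",
    "code": "qwen-variant",
    "web-research": "qwen-variant",
}

DEFAULT_MODEL = "claude-3.5-sonnet"

def route(task_type: str) -> str:
    """Return the model assigned to a task type, falling back to a default."""
    return ROUTES.get(task_type, DEFAULT_MODEL)

print(route("code"))      # qwen-variant
print(route("planning"))  # claude-3.5-sonnet (fallback)
```

The point of such a design is that no single model has to be best at everything; the router picks per task, which is what distinguishes an agent system from a single-model chatbot.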

Independent Functioning Agents

Manus also coordinates a network of independently functioning agents, allowing it to work autonomously on several tasks at once. This architecture extends its capabilities well beyond those of a traditional chatbot.

Early User Impressions

Despite the high demand, fewer than 1% of waitlisted users have had the chance to try Manus firsthand. Some of those who did compared it against ChatGPT, with interesting results.

Prompt Comparisons: Manus vs. ChatGPT

To gauge the effectiveness of Manus, users conducted side-by-side testing using five distinct prompts. Here’s a breakdown of the results:

1. Complex Problem Solving

  • Prompt: Analyze the potential economic impacts of implementing a universal basic income in a developed country.
  • ChatGPT: Provided clear and structured insights with a balanced discussion of pros and cons, suitable for a general audience.
  • Manus: Offered a deep dive that included various economic perspectives but leaned towards high-level academic analysis. While thorough, the response took nearly an hour to generate.
  • Winner: ChatGPT for its accessibility and quicker delivery.

2. Creative Content Generation

  • Prompt: Compose a poem capturing the essence of autumn in a metropolitan city.
  • ChatGPT: Created a rhythmic poem with vivid imagery reflecting the beauty of autumn within an urban landscape.
  • Manus: Delivered an extensive free-verse poem, filled with sensory details and metaphors, presenting a deep exploration.
  • Winner: Manus for its rich, immersive details.

3. Technical Explanation

  • Prompt: Explain blockchain technology to a non-technical audience.
  • ChatGPT: Offered a concise explanation, focusing on simplicity and accessibility.
  • Manus: Provided a comprehensive breakdown, including history and applications, making the subject understandable but more complex.
  • Winner: Manus for a detailed but digestible explanation.

4. Ethical Dilemma

  • Prompt: Discuss ethical considerations in using AI for surveillance.
  • ChatGPT: Used practical examples and provided actionable solutions, though lacking in theoretical depth.
  • Manus: Delved into multiple viewpoints and regulatory strategies but lacked a succinct summary.
  • Winner: Manus for depth of analysis.

5. Advanced Reasoning

  • Prompt: Two cyclists traveling towards each other: determine the meeting point and time.
  • ChatGPT: Delivered a clear and concise answer with a helpful recap.
  • Manus: Also arrived at the right conclusion, providing thorough labeling for clarity.
  • Winner: Manus for ensuring comprehensive accuracy.
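The cyclists prompt is a standard relative-speed problem: when two riders close on each other, the gap shrinks at the sum of their speeds. The article doesn’t quote the prompt’s actual figures, so the numbers below are hypothetical:

```python
# Hypothetical figures (the article elides the prompt's actual numbers):
# cyclists start 30 km apart, riding toward each other at 10 and 5 km/h.
distance_km = 30.0
speed_a_kmh = 10.0
speed_b_kmh = 5.0

# The gap closes at the sum of the speeds, so time to meet is
# distance divided by the combined (closing) speed.
time_h = distance_km / (speed_a_kmh + speed_b_kmh)

# Meeting point, measured from cyclist A's starting position.
meeting_point_km = speed_a_kmh * time_h

print(time_h)            # 2.0 (hours)
print(meeting_point_km)  # 20.0 (km from A's start)
```

With these assumed values the riders meet after 2 hours, 20 km from A’s start; both ChatGPT and Manus reportedly got this class of problem right, differing only in how much working they showed.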

Overall Performance

Across all five tasks, Manus tended to provide detailed, well-researched answers that show its reasoning process. However, those responses often took longer to generate and can overwhelm users who want quick replies, especially when a question doesn’t require such depth.

User Experience

While Manus showcases impressive capabilities, its exhaustive responses are not always practical for everyday use. The comparisons suggest that while Manus excels at depth, ChatGPT is usually more accessible and user-friendly for routine questions. For users deciding which AI tool to adopt, Manus, despite its strengths, may not earn a permanent spot in every toolkit.
