
Chinese startup Manus challenges ChatGPT in data visualization: Which should firms use?

The promise sounds almost too good to be true: drop a CSV (comma-separated values) file into an AI agent, wait two minutes, and get back a sophisticated, polished chart ready for your next board presentation.

But that is precisely what Chinese startup Manus.im delivers with its latest data visualization feature, which launched this month.

Unfortunately, my initial hands-on tests with corrupted datasets reveal a fundamental enterprise problem: impressive features paired with inadequate transparency around data transformations. While Manus handles messy data better than ChatGPT, neither tool is ready for the boardroom.

The spreadsheet problem plaguing enterprise analytics

Rossum's survey of 470 financial leaders found that, despite owning BI licenses, 58% still rely mainly on Excel for monthly KPIs. A separate TechRadar study estimates that spreadsheet dependency affects roughly 90% of organizations, creating a "last-mile data problem" between governed warehouses and hasty CSV exports that land in analysts' inboxes hours before critical meetings.

Manus targets this exact gap. Upload your CSV, describe what you want in natural language, and the agent automatically cleans the data, selects the appropriate Vega-Lite grammar and returns an export-ready PNG chart. No pivot tables required.
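Manus has not published its internals, but the described workflow maps onto a well-known pattern: parse the rows, then emit a Vega-Lite specification that a renderer turns into a PNG. A minimal sketch of that final step, using hypothetical column names ("month", "revenue") standing in for whatever the uploaded CSV contains:

```python
import json

# Hypothetical cleaned rows (in practice, parsed from the uploaded CSV).
rows = [
    {"month": "2024-01", "revenue": 120000},
    {"month": "2024-02", "revenue": 135500},
    {"month": "2024-03", "revenue": 128300},
]

# A minimal Vega-Lite spec: the agent's chart-selection step reduces to
# choosing a mark type and field encodings; a renderer handles the PNG.
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"values": rows},
    "mark": "line",
    "encoding": {
        "x": {"field": "month", "type": "ordinal", "title": "Month"},
        "y": {"field": "revenue", "type": "quantitative", "title": "Revenue (USD)"},
    },
}
print(json.dumps(spec, indent=2))
```

The point of the grammar-based approach is that the hard part, deciding which mark and encodings fit the question asked in natural language, is the only part the AI actually has to get right.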

Where Manus beats ChatGPT: 4x slower, but more accurate with messy data

I tested both Manus and ChatGPT's Advanced Data Analysis with three datasets (a 113k-row e-commerce orders file, a 200k-row marketing funnel and a 10k-row SaaS MRR file), first clean and then corrupted with a 5% error injection of nulls, mixed-format dates and duplicates.
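For readers who want to reproduce this kind of test, the corruption step is straightforward. A sketch of a stdlib-only error injector (my actual test files were prepared separately; column names here are illustrative):

```python
import random

random.seed(42)  # reproducible corruption

def inject_errors(rows, rate=0.05):
    """Corrupt roughly `rate` of rows with a null, a mixed-format date, or a duplicate."""
    corrupted = []
    for row in rows:
        corrupted.append(row)
        if random.random() < rate:
            kind = random.choice(["null", "format", "dup"])
            if kind == "null":
                corrupted[-1] = {**row, "amount": None}
            elif kind == "format":
                # Rewrite an ISO date as US-style: 2024-03-01 -> 03/01/2024
                y, m, d = row["date"].split("-")
                corrupted[-1] = {**row, "date": f"{m}/{d}/{y}"}
            else:
                corrupted.append(dict(row))  # append an exact duplicate
    return corrupted

clean = [{"date": f"2024-01-{d:02d}", "amount": 100 + d} for d in range(1, 31)]
messy = inject_errors(clean, rate=0.05)
print(len(clean), len(messy))
```

These three error types (nulls, mixed date formats, duplicates) were chosen because they are the ones analysts most often inherit from last-minute CSV exports.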

Tool     Data quality  Time  Cleans nulls  Parses dates  Handles duplicates  Comments
Manus    Clean         1:46  N/A           N/A           N/A                 Right trend, standard presentation, but wrong numbers
Manus    Messy         3:53  Yes           Yes           Yes                 Correct trend despite corrupted data
ChatGPT  Clean         0:57  N/A           N/A           N/A                 Fast but wrong visualization
ChatGPT  Messy         0:59  No            No            No                  Wrong trend from unclean data

For context: DeepSeek could only process 1% of the file size, while Claude and Grok each took over 5 minutes and produced interactive charts with no PNG export option.

Outputs:

Manus behaves like a careful junior analyst: it cleans data automatically before charting, flags inconsistencies and handles nulls without explicit instructions. When I requested the same sales-trend analysis on the corrupted dataset, Manus took almost 4 minutes but generated a coherent visualization despite the data quality problems.

ChatGPT works like a speed coder: it prioritizes fast output over data hygiene. The same request took only 59 seconds but produced misleading visualizations because it did not automatically clean the formatting inconsistencies.

Both tools failed, however, on presentation-readiness. Neither produced properly scaled axes or legible labels without a follow-up prompt. Data labels often overlapped or were too small, bar charts lacked proper gridlines, and number formatting was inconsistent.
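The "careful junior analyst" behavior amounts to three standard steps: drop nulls, normalize mixed date formats, and deduplicate. A minimal stdlib sketch of that pipeline (column names "date" and "amount" are assumptions, not Manus's actual code):

```python
from datetime import datetime

def clean_rows(rows):
    """Drop null amounts, normalize mixed-format dates to ISO, skip duplicates."""
    seen, out = set(), []
    for row in rows:
        if row.get("amount") is None:            # drop rows with null amounts
            continue
        date = row["date"]
        for fmt in ("%Y-%m-%d", "%m/%d/%Y"):     # normalize mixed date formats
            try:
                date = datetime.strptime(date, fmt).date().isoformat()
                break
            except ValueError:
                continue
        key = (date, row["amount"])
        if key in seen:                          # skip exact duplicates
            continue
        seen.add(key)
        out.append({"date": date, "amount": row["amount"]})
    return out

messy = [
    {"date": "2024-03-01", "amount": 120},
    {"date": "03/01/2024", "amount": 120},   # same record in US date format
    {"date": "2024-03-02", "amount": None},  # null amount
]
print(clean_rows(messy))  # only the first record survives
```

Notice the second record only reads as a duplicate after date normalization, which is exactly the kind of ordering-sensitive logic a speed-first tool skips.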

The transparency crisis firms cannot ignore

Here is where Manus becomes problematic for enterprise adoption: the agent never surfaces its cleaning steps. An auditor reviewing the final chart cannot verify whether outliers were dropped, imputed or transformed.

If a CFO presents quarterly results based on a Manus-generated chart, what happens when someone asks: "How did you handle the duplicate transactions from the Q2 system integration?" The answer is silence.

ChatGPT, Claude and Grok all show their Python code, though transparency through code review doesn't scale for business users without programming experience. What firms need is a simpler audit trail that builds trust.

The warehouse-native AI alternative

While Manus focuses on CSV uploads, major platforms are building chart generation directly into corporate data infrastructure:

Google's Gemini in BigQuery, generally available since August 2024, generates SQL queries and inline visualizations on live tables while enforcing row-level security.

Microsoft's Copilot in Fabric, which reached GA in May 2024, extends the Power BI experience and creates visuals in Fabric notebooks while working directly with Lakehouse datasets.

GoodData's AI Assistant, launched in June 2025, runs inside customer environments and respects existing semantic models, so users can ask questions in plain language and receive answers that match predefined metrics and definitions.

These warehouse-native solutions eliminate CSV exports entirely, preserve full data lineage and reuse existing security models: advantages that file-upload tools like Manus cannot match.

Critical gaps for enterprise adoption

My tests revealed several blockers:

Live data connectivity remains absent. Manus only supports file uploads, with no Snowflake, BigQuery or S3 connections. Manus.im says connectors are "on the roadmap" but offers no timeline.

Transformation transparency is missing completely. Enterprise data teams need transformation logs that show exactly how the AI cleaned their data and whether its interpretation of the fields is correct.

Export flexibility is limited to PNG outputs. Fine for quick slide decks, but firms need customizable, interactive export options.
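The transformation log that the transparency gap calls for doesn't need to be elaborate. A hedged sketch of what such an audit trail could look like, with illustrative step names and row counts (nothing here reflects Manus's actual behavior):

```python
import json

log = []

def audited(step, rows_before, rows_after, detail):
    """Record each transformation so an auditor can replay the cleaning."""
    log.append({"step": step, "rows_before": rows_before,
                "rows_after": rows_after, "detail": detail})

# Illustrative entries for a hypothetical 200k-row funnel file.
audited("dedupe", 200_000, 198_400, "dropped exact duplicates on order_id")
audited("null_amounts", 198_400, 197_900, "dropped rows with null amount")
audited("date_parse", 197_900, 197_900, "normalized US-format dates to ISO 8601")

# Emitted alongside the chart, this answers the CFO's question directly.
print(json.dumps(log, indent=2))
```

A JSON log like this would let a non-programmer answer "how were duplicates handled?" without ever reading the agent's code.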

The verdict: impressive technology, premature for enterprise use

For SMB executives drowning in ad hoc CSV analysis, Manus's drag-and-drop visualization gets the job done.

Its autonomous data cleaning handles real-world mess that would otherwise require manual preprocessing, cutting turnaround from hours to minutes if your data is reasonably well-formed.

It also offers a significant runtime advantage over Excel or Google Sheets, which require manual pivots and suffer long load times due to local compute limits.

Regulated firms with governed data lakes, however, should wait for warehouse-native assistants such as Gemini in BigQuery or Fabric Copilot, which keep data inside the security perimeter and maintain full lineage.

Conclusion: Manus proves that prompt-to-chart works and handles messy data impressively. For firms, however, the question isn't whether the charts look good; it's whether you should stake your career on data transformations you cannot inspect or audit. Until AI agents with strict audit trails can connect directly to governed tables, Excel will keep its leading role in quarterly presentations.
