Hi, thanks for posting this and sharing your review and perspective on Wren AI with others.
I am one of the core members of Wren AI. Regarding your "dislikes about Wren AI", may I ask you two follow-up questions here:
1. Even using OpenAI and Anthropic models, it was pretty slow to respond on a top end computer (CPU only) -> Could you elaborate on this? And what is your expected latency between asking a question and getting a response back?
2. Did not work well with the JSON data schema. I wish for better support for unstructured data. -> I would also like to hear about your use cases here.
Hey, thanks for the follow-up. Sharing more details as you asked:
1. It timed out for many of the complex queries, and it took more than 30 seconds for one of the simple queries along the lines of "Explain the schema of table x" or "What is the purpose of table x". This was with gpt-4o-mini as the LLM and a Postgres DB. My latency expectations are anchored to the time the target LLM takes to answer the same query directly: add a few more ms at most, and that's how long I expect it to take to respond. The latency has to stay below that for me to adopt the solution. Think about what Cursor has done: its chat takes almost the same time as it would on GPT/Claude directly, and autocomplete takes way less than GPT/Claude (the magic moment).
2. Use case: I dump all the user event data into a Postgres warehouse. The dumped data has a JSON-typed column that holds most of the useful info. In the next stage, I use dbt to create data models on top of this data - facts, dimensions, etc. - as views. This is what I was doing before discovering Wren AI. Seeing Wren's data modeling feature and the power of LLMs to understand unstructured data, I was tempted to improve this process as follows: ask Wren AI to explain the unstructured data I have, test hypotheses about potential fact/dimension views to derive some meaningful BI, and reuse that model to create a dashboard. But it failed at the first step itself. I guess the reason could have been either too little context added to the query or too much. If I ask GPT independently to do the same task by sharing a few sample rows, it does help to a certain extent. I expected even better output from Wren, since it can potentially add even more context to deliver a better outcome. Unfortunately, it timed out many times, and sometimes it just returned completely gibberish output that showed no understanding of the data available in the JSON column.
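For concreteness, the kind of flattening my dbt models do over that JSON column can be sketched in plain Python (the event shape and field names below are made up for illustration, not my real schema):

```python
import json

# Hypothetical raw rows as they might sit in the Postgres JSON column.
raw_events = [
    '{"user_id": 1, "event": "click", "props": {"page": "/home", "ms": 120}}',
    '{"user_id": 2, "event": "view", "props": {"page": "/docs", "ms": 340}}',
]

def flatten(row: str) -> dict:
    """Pull the useful fields out of the JSON blob into flat columns,
    the way a dbt model would with ->> / #>> extraction in SQL."""
    e = json.loads(row)
    return {
        "user_id": e["user_id"],
        "event": e["event"],
        "page": e["props"]["page"],
        "duration_ms": e["props"]["ms"],
    }

# This list plays the role of the fact view the dbt model produces.
fact_events = [flatten(r) for r in raw_events]
print(fact_events[0])
# {'user_id': 1, 'event': 'click', 'page': '/home', 'duration_ms': 120}
```

This is the step I hoped Wren AI could help with: proposing which fields inside the JSON are worth promoting to columns, before I commit them to a dbt view.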
I hope this helps