A couple of things come to mind:
1. For the reasons above, the best way to use an LLM right now is as a tool picker (and occasionally a tool builder) rather than an executor. Ideally it should convert a human task into code, so the intermediary steps can be verified; a rough sketch of that pattern follows the list.
2. I think that because both error rates and costs compound across steps, agents are mostly toys for hobbyists right now. For example, a step that is 95% reliable, chained ten times, succeeds only about 60% of the time (0.95^10 ≈ 0.60). A lot of current solutions revolve around adding more prompting and more calls to OpenAI, which only seems reasonable for high-value, latency-tolerant tasks.
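To make point 1 concrete, here is a minimal sketch of the "LLM picks a tool, code does the work" idea. Everything in it is illustrative: `call_llm` is a hypothetical placeholder for whatever model API you use, and the `TOOLS` registry is made up for the example. The point is that the model only emits a structured plan, which is plain data you can verify before anything executes.

```python
from typing import Any, Callable

# Deterministic tools the model is allowed to pick from (illustrative only).
TOOLS: dict[str, Callable[..., Any]] = {
    "word_count": lambda text: len(text.split()),
    "to_upper": lambda text: text.upper(),
}


def call_llm(task: str, tool_names: list[str]) -> dict:
    """Placeholder for a real model call (OpenAI, a local model, etc.).

    The model is asked to return a structured tool choice rather than to
    act directly. Hard-coded here purely for illustration.
    """
    return {"tool": "word_count", "args": {"text": task}}


def run_task(task: str) -> Any:
    plan = call_llm(task, list(TOOLS))
    # Verification happens here: the plan is inspectable data, so unknown
    # tools or malformed arguments are rejected before anything runs.
    tool = TOOLS.get(plan.get("tool", ""))
    if tool is None:
        raise ValueError(f"model picked an unknown tool: {plan!r}")
    if not isinstance(plan.get("args"), dict):
        raise ValueError(f"malformed arguments: {plan!r}")
    return tool(**plan["args"])


if __name__ == "__main__":
    print(run_task("count the words in this sentence"))  # -> 6
```

The LLM never touches the execution path directly; it only chooses among tools whose behavior is deterministic and testable, which is what makes the intermediate steps checkable.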