GPT-4 is better suited than the chat-focused GPT-3.5 Turbo for data-related operations: it is less likely to hallucinate, allows much simpler prompting, and is stronger at natural-language inference.
There are also variants of the 3.5 and 4 models that support larger context windows, which could be used to apply RAG principles to much larger documents.
It would be great to deploy those models on the backend and allow selection of the associated deployment from the GPT action itself, so users can pick and choose which one to use depending on the use case.
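As a minimal sketch of what that selection could look like, the snippet below maps use cases to Azure OpenAI deployment names and resolves one at request time. The use-case keys and deployment names here are illustrative assumptions, not part of any existing API:

```python
# Hypothetical mapping of use cases to backend deployment names.
# The names on the right are placeholders; substitute whatever
# deployments actually exist in your Azure OpenAI resource.
DEPLOYMENTS = {
    "data-ops": "gpt-4",           # stronger inference, fewer hallucinations
    "chat": "gpt-35-turbo",        # cheaper, chat-focused
    "rag-large": "gpt-4-32k",      # larger context window for big documents
}

def pick_deployment(use_case: str) -> str:
    """Return the deployment name for a use case, defaulting to chat."""
    return DEPLOYMENTS.get(use_case, DEPLOYMENTS["chat"])
```

The GPT action could then expose `use_case` as a parameter and pass the resolved deployment name to the backend call.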