There is not enough information in the search results to directly answer the query about the computational trade-offs of using Anthropic's hybrid model. However, the search results do provide some information about Anthropic's models and computational strategies.
Anthropic uses Constitutional AI (CAI) extensively in their Claude models, a process that relies heavily on synthetic data[2]. They are considered a leader in synthetic data usage[2]. Adding more context to this process raises the capability bar the model must clear to produce useful critique-based synthetic data, and the resulting systems are harder to engineer[2]. It can be inferred from Anthropic's statements that many of their methods do not work well without reward-model / RLHF capabilities at roughly 50B parameters and up[2].
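To make the critique step concrete, here is a minimal sketch of the CAI-style critique-and-revision loop that generates synthetic data. All function names, the placeholder `model` call, and the example principle are illustrative assumptions, not Anthropic's actual API or constitution:

```python
# Minimal sketch of a Constitutional AI critique-revision loop producing
# synthetic training pairs. `model` is a stub standing in for a real LLM
# call; every name here is a hypothetical placeholder.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
]

def model(prompt: str) -> str:
    # Placeholder for a real LLM call; just echoes a stub string.
    return f"[model output for: {prompt[:40]}...]"

def critique_and_revise(question: str, principles=CONSTITUTION) -> dict:
    """Draft an answer, critique it against each principle, then revise.

    The (prompt, revision) pairs become synthetic fine-tuning data.
    The critique step is where the capability bar noted above shows up:
    a weak base model produces critiques too poor to improve the draft.
    """
    draft = model(question)
    for principle in principles:
        critique = model(
            f"Critique this response against: {principle}\n{draft}"
        )
        draft = model(
            f"Revise the response given this critique:\n{critique}\n{draft}"
        )
    return {"prompt": question, "revision": draft}

pair = critique_and_revise("How do I pick a strong password?")
```

Each pass through the loop adds more context (the prior draft plus the critique) to the prompt, which is one way the engineering burden compounds as the pipeline grows.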
Running state-of-the-art models is expensive in terms of infrastructure and compute, and Anthropic needs investment and revenue streams to work on current and future models[5].
Citations:
[1] https://www.reddit.com/r/singularity/comments/1dyhjwc/the_problem_with_anthropic/
[2] https://www.interconnects.ai/p/llm-synthetic-data
[3] https://ailabwatch.org/companies/anthropic/
[4] https://www.techrepublic.com/article/anthropic-claude-large-language-model-research/
[5] https://www.lesswrong.com/posts/MNpBCtmZmqD7yk4q8/my-understanding-of-anthropic-strategy
[6] https://www.inc.com/ben-sherry/anthropic-just-announced-its-most-advanced-ai-model-yet-these-are-its-top-use-cases.html
[7] https://research.contrary.com/company/anthropic
[8] https://cloud.google.com/blog/products/ai-machine-learning/anthropics-claude-3-opus-and-tool-use-are-generally-available-on-vertex-ai