# Generative Search
Generative Search empowers your project by generating dynamic responses based on your existing data. It provides a flexible way to create content, answer questions, and draw connections between data objects in your collection.
## How It Works
The Generative module introduces a `generate {}` operator, which you can use within the GraphQL `_additional {}` property of your `Get {}` queries. This operator is your gateway to creating custom responses and summaries based on the returned data.
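As a minimal sketch of where the operator sits, a query might look like the following (the `Article` class and `summary` field are illustrative names, not part of the module itself):

```graphql
{
  Get {
    Article(limit: 3) {
      summary
      _additional {
        generate(
          # singleResult runs the prompt once per returned object;
          # {summary} interpolates that object's field value
          singleResult: { prompt: "Summarize this: {summary}" }
        ) {
          singleResult
          error
        }
      }
    }
  }
}
```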
## Examples of Use Cases
| Use Case | Template | Description |
|---|---|---|
| Content Summarization | `Summarize the following in a tweet: {summary}` | Generates concise summaries that fit into a tweet. |
| Data Comparison | `Explain why these results are similar to each other` | Ideal for drawing connections and understanding the commonalities between different data objects. |
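Templates like these plug directly into the `generate {}` operator. A sketch of the data-comparison template applied across a whole result set (class and field names are illustrative):

```graphql
{
  Get {
    Article(limit: 3) {
      summary
      _additional {
        generate(
          # groupedResult runs the task once over all returned objects,
          # so the model can reason about the set as a whole
          groupedResult: { task: "Explain why these results are similar to each other" }
        ) {
          groupedResult
          error
        }
      }
    }
  }
}
```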
## Generative Search Engines
- OpenAI
OpenAI’s models are adept at generating coherent and contextually relevant text, making them ideal for summarizing content, answering questions, and providing explanations within the generative search context. Their capability to understand nuanced queries and generate human-like responses enhances the user experience.
- Use Case: When a user asks to summarize a group of documents or explain the similarity between different search results, OpenAI models can generate concise and insightful responses.
- Models:
| Model | Description |
|---|---|
| `gpt-3.5-turbo` | Recommended for general-purpose generative search tasks, balancing cost and performance. |
| `gpt-3.5-turbo-16k` | Suited for tasks requiring an extended context window. |
| `gpt-4` | Ideal for complex queries that require deep understanding and elaborate responses. |
| `gpt-4-32k` | Best for extremely detailed and context-rich generative search tasks. |
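The model is typically selected per collection in the schema's module configuration. A hedged sketch, assuming the `generative-openai` module is enabled (the `Article` class name is illustrative):

```json
{
  "class": "Article",
  "moduleConfig": {
    "generative-openai": {
      "model": "gpt-4"
    }
  }
}
```

If no model is specified, the module falls back to its default, so this block is only needed when you want one of the other models from the table above.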
- Cohere
Cohere’s models excel at producing fluent and grammatically correct responses, and they are continuously updated to keep language generation current.
- Use Case: Cohere is useful for generating responses where linguistic accuracy and fluency are paramount, such as when creating detailed explanations or summaries in a professional context.
- Models:
| Model | Description |
|---|---|
| `command-xlarge-nightly` | Best for tasks requiring the most up-to-date language understanding. |
| `command-xlarge-beta` | Provides a balance between stability and recency. |
| `command-xlarge` | Recommended for production use where stability is crucial. |
- PaLM
PaLM's models, with their high token limits, are particularly suited for tasks that require processing large amounts of text or generating detailed responses. They provide contextually rich and nuanced answers.
- Use Case: PaLM is ideal for generating comprehensive responses to complex queries, especially when the user needs a deep dive into a specific topic based on the search results.
- Models:
| Model | Description |
|---|---|
| `chat-bison` | The go-to model for extensive generative search tasks, offering a balance between length of response and contextual awareness. |