** AVAILABLE FROM JULY / AUGUST 2025 **
Overview
The 'AI – Run Prompt (Text)' Power Automate action leverages artificial intelligence (Azure OpenAI) to interpret and respond to user-defined prompts, returning both the response and the current 'Conversation'.
Credit Count
The credit count is determined using the following calculation:
Subscription Availability
The 'AI – Run Prompt (Text)' flow action is available in all Power Automate regions and paid Encodian subscription plans.
Default Parameters
The default 'AI – Run Prompt (Text)' flow action parameters are detailed below:
- Model: Select the OpenAI model to use
- Prompt: The prompt for processing the 'Text' value provided
- Conversation: A JSON representation of the OpenAI conversation for the associated chat session (see the sketch after this list)
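The exact schema of the 'Conversation' value is not documented here. A minimal sketch, assuming it mirrors the standard OpenAI chat message array (an assumption, not a confirmed Encodian schema), of building and serialising a conversation to pass back into a follow-up call:

```python
import json

# ASSUMPTION: the 'Conversation' JSON mirrors the OpenAI chat message array;
# the real Encodian schema may differ, so treat this purely as illustrative.
conversation = [
    {"role": "user", "content": "Summarise the quarterly sales figures."},
    {"role": "assistant", "content": "Here is a short summary: ..."},
]

# Supply the serialised conversation in the 'Conversation' parameter of the
# next 'AI - Run Prompt (Text)' call so the model retains the chat context.
conversation_parameter = json.dumps(conversation)
print(conversation_parameter)
```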
Advanced Parameters
The advanced 'AI – Run Prompt (Text)' flow action parameters are detailed below:
- Frequency Penalty: A value that influences the probability of generated tokens appearing based on their cumulative frequency in generated text. Positive values will make tokens less likely to appear as their frequency increases and decrease the likelihood of the model repeating the same statements verbatim. Supported range is [-2, 2]
- Maximum Output Tokens: The maximum number of tokens that can be generated in the response
- Presence Penalty: A value that influences the probability of generated tokens appearing based on their existing presence in generated text. Positive values will make tokens less likely to appear when they already exist and increase the model's likelihood to output new topics. Supported range is [-2, 2]
- Temperature: The sampling temperature, which controls the apparent creativity of generated completions. Higher values make output more random, while lower values make results more focused and deterministic. It is not recommended to modify temperature and top_p for the same completions request, as the interaction of these two settings is difficult to predict. Supported range is [0, 1] (see the mapping sketch after this list)
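For orientation, these advanced parameters broadly correspond to the standard Azure OpenAI chat-completion sampling options of the same names. A minimal sketch of the equivalent request options, assuming the action passes the values through unchanged (this pass-through is an assumption, not documented behaviour):

```python
# ASSUMPTION: the action parameters map to Azure OpenAI chat-completion
# options of the same name; the exact pass-through is not documented.
completion_options = {
    "frequency_penalty": 0.5,  # [-2, 2]; discourages verbatim repetition
    "presence_penalty": 0.3,   # [-2, 2]; encourages new topics
    "temperature": 0.2,        # [0, 1]; lower = more focused and deterministic
    "max_tokens": 800,         # cap on generated output tokens
}

# Per the guidance above, avoid tuning temperature and top_p together in the
# same request, as their interaction is difficult to predict.
print(completion_options)
```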
Return Parameters
The 'AI – Run Prompt (Text)' flow action returns the following data; a short example of consuming these values follows the lists below.
Action Specific Return Values
- Message - The OpenAI response message
- Conversation - The OpenAI conversation as JSON
Standard Return Values
- OperationId - The unique ID assigned to this operation.
- HttpStatusCode - The HTTP Status code for the response.
- HttpStatusMessage - The HTTP Status message for the response.
- Errors - An array of error messages should an error occur.
- Operation Status - Indicates whether the operation has been completed, has been queued or has failed.
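A minimal sketch of consuming these return values in a calling script, assuming the action output is parsed as a JSON object with the field names listed above (the sample values and exact property casing are illustrative, not taken from a live response):

```python
import json

# Hypothetical action output shaped like the return values listed above;
# all values are placeholders, not taken from a real response.
raw_output = """
{
  "Message": "Here is the summary you requested...",
  "Conversation": "[{\\"role\\": \\"user\\", \\"content\\": \\"Summarise...\\"}]",
  "OperationId": "00000000-0000-0000-0000-000000000000",
  "HttpStatusCode": 200,
  "HttpStatusMessage": "OK",
  "Errors": [],
  "OperationStatus": "Completed"
}
"""

result = json.loads(raw_output)

if result["HttpStatusCode"] == 200 and not result["Errors"]:
    print(result["Message"])
    # Feed 'Conversation' back into the next call to keep the chat context.
    next_conversation = result["Conversation"]
else:
    raise RuntimeError(f"AI prompt failed: {result['Errors']}")
```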
Error Handling - 'The translation request has been throttled'
At times of peak load, the following error may be generated:
"The translation request has been throttled, please contact support@encodian.com"
This results in delayed processing. Paying customers with a 'Large' or higher subscription level can request a dedicated processing endpoint via support@encodian.com, which helps to alleviate this issue.
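Where a flow or script calls this action programmatically, a simple retry with a delay is normally enough to ride out transient throttling. A minimal sketch in Python, assuming the throttle message is surfaced in the 'Errors' array and using a hypothetical call_run_prompt function standing in for the actual action invocation:

```python
import time

THROTTLE_TEXT = "The translation request has been throttled"

def run_prompt_with_retry(call_run_prompt, prompt, retries=3, delay_seconds=30):
    """Call a hypothetical 'AI - Run Prompt (Text)' wrapper, retrying on throttling."""
    result = {}
    for attempt in range(1, retries + 1):
        result = call_run_prompt(prompt)
        errors = result.get("Errors") or []
        if not any(THROTTLE_TEXT in error for error in errors):
            return result
        # Throttled at peak load: wait (with a growing delay) and try again
        # rather than failing the whole flow immediately.
        time.sleep(delay_seconds * attempt)
    return result
```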